
Interesting article:
http://seekingalpha. el-to-arm-chips

The main question regarding which standard will encroach on the other may come down to consumer preference for mobility versus power. Many of Apple's current Mac computers, including its iMac and MacBook Pro models, are high-powered machines designed for users performing processor-heavy activities such as video editing. In practice, though, most users do not end up doing video editing or computer programming; they mostly use their computers for low-power functions like email and web browsing, which makes these computers substantially overpowered for their needs. Such consumers might be better off choosing lower-powered and lower-priced Air models, but many delight in knowing their device is the most powerful model available, regardless of their actual usage.

Comments

anonymous Sat, 11/24/2012 - 11:03

ARM's history is BBC and Acorn. They always made great desktop chips, and RISC OS wasn't bad either. Steve Jobs saved ARM from the Acorn collapse, and it was Mr Jobs who steered the company towards making the best power-efficient chips, suitable for mobile devices and battery power. It was always true that the Acorn chips were able to get more performance out of fewer components; in the days of the Archimedes desktop computer they usually had 1/10th to 1/50th of the transistor count of Intel, Motorola or IBM chips for similar performance. They're an extension of the Cambridge University research scene, which was heavily headhunted by Bell Labs during the 1970s to build UNIX and C++ when the UK government funding was cut as a result of the gigantic government sell-off at the behest of the IMF following the 1976 IMF loans. For the past four decades Cambridge has been (and still is) one of the places turning out the best work in computer science.

http://en.wikipedia.org/wiki/Acorn_Computers
http://en.wikipedia.org/wiki/ARM_Holdings

Apple only left RISC technology behind and switched to Intel's post-Pentium 4 "RISC with an x86 CISC shell" system because IBM sold out their G5 technology to put into the Xbox. IBM finally cut off Apple's supply of CPUs, after years of keeping them out of the laptop market, by selling the CPUs to Microsoft for a *games console*. After the Xbox deal, Apple couldn't ship G5s in any kind of volume; IBM just wouldn't sell them enough chips. The other big issue was that IBM's POWER technology http://en.wikipedia.org/wiki/POWER6 http://en.wikipedia.org/wiki/POWER7 would never fit into a mobile device; the computers they typically build using POWER6 or POWER7 have CPUs the size of medieval bibles http://www.engadget.com/2010/07/23/ibms-zenterprise-architecture-makes-mainframes-cool-again-also/. It's heavy-duty, mainframe-type technology. Needless to say the performance is wonderful, and banks and intelligence agencies are willing to pay big bucks for it.

The trouble really started when Motorola, Apple's CPU manufacturer since the early 1980s, were split up during the Blackstone/Carlyle takeover, with the semiconductor division being spun off into a research-free company called Freescale. That was what really put the nail in the coffin for PowerPC technology; luckily Apple had an insurance policy called Marklar, an Intel build of OSX. I'm still convinced that G5s ran smoother and faster, GHz for GHz, than the Intel machines. Today you'll find the same 1.6GHz G4 chips they were putting in Macs running the valve timing on car engines. That's what Freescale's new management thought was the future for the company, not making cutting-edge CPUs for the likes of Apple. Motorola are today essentially a military contractor; there's more 'margin' in that market. Look up Carlyle, and you'll see what sort of priorities they have.

ARM doubtless would (will) design some incredible desktop-class chips, fully capable of video editing and other high-powered tasks.

It's also entirely plausible that apple will build their own fab, and that the future machines will be some sort of system-on-a-chip. This could all be the best thing since sliced bread.

Unfortunately, the rather dodgy majority shareholders in Apple today are private equity companies like Blackstone (seeing a pattern here?), and I'm inclined to believe that the platform shift to a "desktop iOS" and an ARM kernel will come with some very serious repercussions in the form of Gatekeeper and JAIL for the desktop platform. Consumerised Lockout Syndrome, where Apple are your administrator, all software is vetted by them, and 30% of all software revenue is taxed by them.

If they do that, it'll be killing the goose that laid the golden egg, and all the serious computer users will run away to OpenBSD or Linux, or perhaps some new platform. Ultimately the hacker ethic is being lost quite quickly at Apple and Microsoft, as they get into controlling software marketplaces as app-store taxmen and software censors.

Bertrand Serlet, VP of Computer Science and Operating Systems, left a few years back; he'd been with NeXT since the beginning and was basically the architect of OSX. He left due to the iOS brain-drain, saying "I'm not interested in making products or appliances, I do computer science."

They just fired Scott Forstall, too; he'd been head of iOS since the beginning. Guess who is now in charge of interface design? Jony Ive. A product designer who (AFAIK) doesn't have a computer science degree, although I'm sure he knows a thing or two from being at Apple for something like 20 years. Still, he's a product designer, not a computer scientist, and those are different disciplines.

anonymous Sat, 12/22/2012 - 19:21

[="http://www.xbitlabs.com/news/cpu/display/20120921010327_Nvidia_Develops_High_Performance_ARM_Based_Boulder_Microprocessor_Report.html"]Nvidia Develops High-Performance ARM-Based "Boulder" Microprocessor - Report - X-bit labs[/]="http://www.xbitlabs…"]Nvidia Develops High-Performance ARM-Based "Boulder" Microprocessor - Report - X-bit labs[/]

[[url=http://="http://www.theregis…"]Can't wait for Nvidia? Try these Italian baby ARM clusters with GPU options ? The Register[/]="http://www.theregis…"]Can't wait for Nvidia? Try these Italian baby ARM clusters with GPU options ? The Register[/]

http://www.pcworld.com/article/2013429/dell-testing-64bit-arm-server-with-chip-from-appliedmicro.html

System-on-a-Chip 64-bit servers.

anonymous Sun, 12/23/2012 - 16:10

Well, a System-on-a-Chip architecture is rather the whole point with ARM.

To take advantage of this for workstation users, Apple would need to implement some sort of clustering or grid system into the OS. I'd point the finger at Grand Central Dispatch, and also point out that Apple's previous grid system, Xgrid, was recently discontinued, so the likelihood is that they've got something cooking which replaces it at a system-daemon level (Grand Central most likely; see the sketch below). If you look at Mach, the Carnegie Mellon research project that the OSX kernel came from, they were very interested in developing those sorts of grid architectures.
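Just to make concrete what I mean by GCD being the scheduling layer, here's a minimal sketch using the plain libdispatch C API. The render_chunk() function and the chunk count are made up for illustration; the point is only that the OS already has a mechanism for fanning independent work out over however many cores it sees.

    /* Toy sketch: fan independent work out across cores with Grand Central
     * Dispatch (the libdispatch C API). render_chunk() is a stand-in for a
     * slice of real work -- an audio bounce, a video frame, whatever. */
    #include <dispatch/dispatch.h>
    #include <stdio.h>

    static void render_chunk(size_t i)
    {
        printf("chunk %zu done\n", i);
    }

    int main(void)
    {
        dispatch_queue_t q =
            dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

        /* dispatch_apply blocks until all 8 chunks have run, spread over
         * as many cores as the scheduler sees fit */
        dispatch_apply(8, q, ^(size_t i) {
            render_chunk(i);
        });

        return 0;
    }

Compile it with clang on OSX and it'll saturate whatever cores are there; the hard and interesting part is making that same call transparently reach other machines on the LAN, which is exactly where an Xgrid replacement would have to live.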

Multi-channel audio is one of those tasks that can easily be split up among a bunch of computers; Apple's Logic already has the Node system for running multiple computers over Ethernet.

I'd be inclined to guess that Apple's next long-term move in high-performance computing (workstations, servers, etc.) would be a "cloud" architecture, where the operating system platform is abstracted, floating on top of an aggregate network of computers, with a scheduler distributing work over multiple compute nodes; low-latency work such as audio or video processing would run as a native low-latency application on each compute node.
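To give a rough shape to that scheduler idea (purely a toy; node_t, submit_job and the hostnames are invented for illustration, not any Apple API), it amounts to a dispatcher handing job descriptions to a set of compute nodes:

    /* Toy round-robin dispatcher handing jobs to a fixed set of compute
     * nodes. In a real system submit_job() would go over the LAN, and the
     * scheduler would weigh load, latency and data locality instead of
     * just rotating through the list. */
    #include <stdio.h>

    typedef struct {
        const char *hostname;
        int queued;              /* jobs currently queued on this node */
    } node_t;

    static void submit_job(node_t *n, int job_id)
    {
        n->queued++;
        printf("job %d -> %s (queue depth %d)\n", job_id, n->hostname, n->queued);
    }

    int main(void)
    {
        node_t nodes[] = {
            { "node-a.local", 0 },
            { "node-b.local", 0 },
            { "node-c.local", 0 },
        };
        const int n_nodes = (int)(sizeof nodes / sizeof nodes[0]);

        for (int job = 0; job < 9; job++)
            submit_job(&nodes[job % n_nodes], job);  /* naive round-robin */

        return 0;
    }

The low-latency audio/video case is the one this naive approach doesn't cover, which is why each node would still run a native application locally rather than waiting on the dispatcher.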

Of course, it could get really scary, where Apple supply thin-client interfaces and the actual guts of the system are buried in a datacenter that you have to pay rent on, where "applications" work in the 'cloud', i.e. Apple's timesharing, multi-nodal mainframe, which nobody has physical access to in order to jailbreak/liberate the features. One hopes that you'll still be able to buy your own machines! One good example of this already happening is Siri.

anonymous Sun, 12/23/2012 - 16:48

The "thin client" system would never work for me. It's got too many problems, largely to do with the unreliability and slowness of a WAN (wide-area-network). Aside from the fact that it lays one open to all sorts of administration by people who don't honestly care about your business, and the issues of rent-seeking economics and lock-out. Developers could just change features, introduce bugs. It'd be a nightmare for a professional user.

I do, however, have faith in the idea of running a cluster in-house over a LAN: a rack of various computers all connected together with 10-gigabit Ethernet. That's not uncommon these days in the scientific scene (and the 3D visual effects scene), and I don't see why people who work with media wouldn't benefit from going down that route. Clustering has been viable for over a decade now; the only reason we're not already running desktop clusters is that it requires a substantial change in platform architecture and a rewrite of most of the software applications.

However, OSX has been designed to be able to work that way from the start, hence Mr Jobs's statement that the technology in OSX would be good for over 35 years into the future. The issue has been transitioning the Macintosh software from Carbon over to Cocoa, something that Adobe for example only just got around to. Lots of software up to this point wasn't actually originally written for OSX.

Microsoft have also developed a multi-nodal build of Windows (mainly for running datacenters and websites, but also capable of doing "virtual machines", potentially for desktop/clustering/HPC use) called Azure: http://en.wikipedia…


Windows Azure uses a specialized operating system, also called Windows Azure, to run its "fabric layer": a cluster hosted at Microsoft's datacenters that manages the computing and storage resources of the machines and provisions those resources (or a subset of them) to applications running on top of Windows Azure. Windows Azure has been described as a "cloud layer" on top of a number of Windows Server systems, which use Windows Server 2008 and a customized version of Hyper-V, known as the Windows Azure Hypervisor, to provide virtualization of services. Scaling and reliability are controlled by the Windows Azure Fabric Controller, so the services and environment do not crash if one of the servers within the Microsoft datacenter fails; it also manages the user's web application, handling things like memory resources and load balancing.
