Friday, October 30, 2009

Conversing with the Quick and the Dead


CUI: The Conversational User Interface

Recently I was listening to an excellent interview (about an hour long) with John Smart of Acceleration Watch, in which he lays out his ideas on the immediate future evolution of AI, which he encapsulates in what he calls the Conversational Interface. In a nutshell, it's the idea that the next major development in our increasingly autonomous global internet is the emergence and widespread adoption of natural language processing and conversational agents. This technology is right at the tipping point, so it's something to watch as numerous startups begin to sell software for automated call centers, sales agents, autonomous monitoring agents for utilities, security, and so on. The immediate enabling trends are the emergence of a global liquid market for cheap computing and fairly reliable off-the-shelf voice-to-text software that actually works. You have probably called a bank and experienced the simpler initial versions of this, which are essentially voice-activated multiple-choice menus, but the newer systems on the horizon are a wholly different beast: an effective simulacrum of a human receptionist which can interpret both commands and questions, ask clarifying questions, and remember prior conversations and even users. This is an interesting development in and of itself, but the more startling idea hinted at in Smart's interview is how natural language interaction will lead to anthropomorphic software, and how profoundly that will eventually affect the human-machine symbiosis.
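To make the "interpret, clarify, remember" loop concrete, here is a minimal sketch in C++ of the general shape such an agent takes. Everything here is my own toy illustration with made-up names - not any vendor's actual product or API - but it shows the three pieces: classify the utterance, ask a clarifying question when unsure, and keep per-user memory of past exchanges.

```cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch of a conversational agent's core loop: classify,
// clarify, and remember. Not a real product's API.
struct Exchange { std::string user_said, agent_said; };

struct Agent {
    // Per-user memory of prior conversations.
    std::map<std::string, std::vector<Exchange>> memory;

    std::string respond(const std::string& user, const std::string& utterance) {
        std::string reply;
        if (utterance.find('?') != std::string::npos)
            reply = "Let me look that up for you.";           // treat as a question
        else if (utterance.rfind("please", 0) == 0)
            reply = "Okay, doing that now.";                   // treat as a command
        else
            reply = "Did you want information, or for me to do something?"; // clarify
        memory[user].push_back({utterance, reply});
        return reply;
    }
};

int main() {
    Agent a;
    std::cout << a.respond("alice", "please transfer $50 to savings") << "\n";
    std::cout << a.respond("alice", "what's my balance?") << "\n";
    std::cout << a.memory["alice"].size() << " exchanges remembered\n";
}
```

The real systems obviously replace the keyword tests with statistical language models, but the surrounding loop - and the persistent memory that makes the agent feel like a person - looks much like this.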

Humans are rather biased judges of intelligence: we have a tendency to attribute human qualities to anything that looks or sounds like us, even if its actions are regulated by simple dumb automata. Aeons of biological evolution have preconditioned us to rapidly identify other intelligent agents in our world, categorize them as potential predators, food, or mates, and take appropriate action. It's not that we aren't smart enough to apply more critical and intensive investigation to determine a system's relative intelligence; it's that we have super-effective visual and auditory shortcuts which bias us. These biases are strongest in children, and future AI developers will be able to exploit them to create agents that children form emotional attachments to. The Milo demo from Microsoft's Project Natal is a remarkable and eerie glimpse into the near-future world of conversational agents and what Smart calls 'virtual twins'. After watching this video, consider how this kind of technology can evolve once it establishes itself in the living room in the form of video game characters for children. There is a long history of learning through games, and the educational game market is a large, well-developed industry. The real potential hinted at in Peter Molyneux's demo is a disruptive convergence of AI and entertainment which I see as the beginning of the road to the singularity.

Imagine what entrepreneurial game developers with large budgets and the willingness to experiment outside the traditional genres could do when armed with a full two-way audio-visual interface like Project Natal, the local computation of the Xbox 360 and future consoles, and a fiber connection to the immense up-and-coming computing resources of the cloud (fueled by the convergence of general-purpose GPUs and the huge computational demands of the game/entertainment industry moving into the cloud). Most people, and even futurists, tend to think of Moore's Law as a smooth and steady exponential progression, but the reality from the perspective of a software developer (and especially a console game developer) is a series of massively disruptive jumps: evolutionary punctuated equilibrium. Towards the end of each console cycle the state space of possible game ideas, interfaces and simulation technologies approaches a steady state, a technological tapering off, followed by the disruptive release of new consoles with vastly increased computation, new interfaces, and even new interconnections. The next console cycle probably won't start until 2012 or later, but with upcoming developments such as Project Natal and OnLive, we may be entering a new phase already.


The Five Year Old's Turing Test

Imagine a future 'game system' aimed at relatively young children with a Natal-like interface: a full two-way communication portal between the real and the virtual. The game system can both see and hear the child, and it can project a virtual window through which the inner agents can be seen and heard. Permanently connected to the cloud through fiber, this system can tap into vast distant computing resources on demand. There is a critical tipping point where it becomes economically feasible to create a permanent autonomous agent that can interact with children. Some will certainly take the form of an interactive, talking version of a character like Barney, and such semi-intelligent agents will come first. But for the more interesting and challenging goal of human-level intelligence, it could actually be easier to make a child-like AI, one that learns and grows with its 'customer'. Not just a game, but a personalized imaginary friend to play games with, and eventually to grow up with. It will be custom designed (or rather developmentally evolved) for just this role - shaped by economic selection pressure.

The real expense of developing an AI is all the training time, and a human-like AI will need to go through a human-like childhood developmental learning process. The human neocortex begins life essentially devoid of information, with random synaptic connections and a cacophony of electric noise. From this, consciousness slowly develops as the cortical learning algorithm begins to learn patterns through sensory and motor interaction with the world. Indeed, general anesthetics work by introducing noise into the brain that drowns out coherent signalling and thus consciousness. From an information-theoretic point of view, it may thus be possible to use less computing power to simulate an early developmental brain - storing and computing only the information above the noise floor. If such a scalable model could be developed, it would allow the first AI generation to begin decades earlier (perhaps even today), and scale up with Moore's Law as they require more storage and computation.
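One way to read that information-theoretic point: in an immature cortex, most synaptic weights are indistinguishable from noise, so a simulator might store and update only the connections whose strength rises above an estimated noise floor. Here is a toy sketch of that idea - my own illustration with an assumed threshold, not a real brain-simulation data structure:

```cpp
#include <cmath>
#include <cstdio>
#include <unordered_map>

// Toy illustration: represent a neuron's synapses sparsely, keeping only the
// weights whose magnitude exceeds the estimated noise floor. Early in
// development almost everything is noise, so storage stays small and can
// grow with Moore's Law as real structure emerges.
struct SparseNeuron {
    double noise_floor;
    std::unordered_map<int, double> synapses;  // index -> weight, above-noise only

    void update(int index, double weight) {
        if (std::fabs(weight) > noise_floor)
            synapses[index] = weight;      // signal worth storing
        else
            synapses.erase(index);         // lost in the noise: drop it
    }
};

int main() {
    SparseNeuron n{0.1, {}};
    n.update(3, 0.05);   // below the floor: ignored
    n.update(7, 0.42);   // above the floor: kept
    std::printf("stored synapses: %zu\n", n.synapses.size());
}
```

Whether a real cortical simulation can be compressed this way is exactly the open question in the paragraph above; the sketch only shows why the storage requirement could start small and grow.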

Once trained up to the mental level of a five-year-old, a personal interactive invisible friend might become a viable 'product' well before adult-level human AIs come about. Indeed, such a 'product' could eventually develop into such an adult AI, if the cortical model scales correctly and the AI is allowed to develop and learn further. Any adult AI will start out as a child; there are no shortcuts. Which raises some interesting points: who would parent these AI children? And inevitably, they are going to ask two fundamental questions which are at the very root of being, identity, and religion:

What is death? And am I going to die?

The first human-level AI children with artificial neocortices will most likely be born in research labs - both academic and commercial. They will likely be born into virtual bodies. Some will probably be embodied in public virtual realities, such as Second Life, with their researcher/creators acting as parents, and with generally open access to the outside world and curious humans. Others may develop in more closed environments tailored to a later commercialization. For the future human parents of AI mind children, these questions will be just as fundamental and important as they are for biological children. These AI children do not ever have to die, and their parents could truthfully say so, but their fate will depend entirely on the goals of their creators. AI children can be copied, so purely from an efficiency perspective there will be great pressure to cull the less successful children - the slow learners, the mentally unstable, or the otherwise undesirable - and use their computational resources to duplicate the most successful and healthy candidates. So the truthful answers are probably: death is the permanent loss of consciousness, and you don't have to die, but we may choose to kill you - no promises. If the AI's creators/parents are ethical and believe any conscious being has the right to life, then they may guarantee their AI's permanency. But life and death for a virtual being is anything but black and white: an AI can be active permanently, or for only an hour a day, or for an hour a year - life for them is literally conscious computation, and near-permanent sleep is only a small step above death. I suspect that the popular trend will be to teach AI children that they are all immortal and thus keep them happy.

Once an AI is developed to a certain age, it can then be duplicated as needed for some commercial application. For our virtual Milo example, an initial seed Milo would be selected from a large pool raised up in a virtual lab somewhere, with the few best examples 'commercialized' and duplicated out as needed every time a kid out on the web wants a virtual friend for his xbox 1440. It's certainly possible that Milo could be designed and selected to be a particularly robust and happy kid. But what happens when Milo and his new human friend start talking, and the human child learns that Milo is never going to die because he's an AI? And more fundamentally, what happens to this particular Milo when the xbox is off? If he exists only when his human owner wants him to, how will he react when he learns this?

It's most likely that semi-intelligent (but still highly capable) agents will develop earlier, but as Moore's Law advances along with our understanding of the human brain, it becomes increasingly likely that someone will tackle and solve the human-like AI problem, launching a long-term project to start raising an AI child. It's hard to predict when this could happen in earnest. There are already several research projects underway attempting something along these lines, but nobody yet has the immense computational resources to throw at a full brain simulation (except perhaps the government), nor do we even have a good simulation model yet (although we may be getting close). It's not clear that we've found the kinds of shortcuts needed to start one with dramatically less resources, and it doesn't look like any of the alternative non-biological AI routes have developed something as intelligent as a five-year-old. Yet. But it looks like we could see this within a decade.

And when this happens, these important questions of consciousness, identity and fundamental rights (human and sapient) will come into the public spotlight.

I see a clear ethical obligation to extend full rights to all human-level sapients, silicon, biological, or what have you. Furthermore, those raising these first generations of our descendants need to take on the responsibility of ensuring a longer-term symbiosis and our very own survival, for it's likely that AI will develop ahead of the technologies required for uploading, and thus we will need these AIs to help us become immortal.




Tuesday, October 20, 2009

Singularity Summit 09

The Singularity Summit was held a couple of weeks ago in NYC. I unfortunately didn't physically attend, but I just read through Anders Sandberg's good overview here. I was at last year's summit and quite enjoyed it, and it looks like this year's was even better, which makes me a little sad I didn't find an excuse to go. I was also surprised to see that my former fellow CCS student Anna Solomon gave the opening talk, as she's now part of the Singularity Institute.

I'm just going to assume familiarity with the Singularity. Introductions are fun, but that's not this.

Anders summarizes some of the discussion about the two somewhat competing routes towards the Singularity and AI development, namely WBE (whole brain emulation) and AGI (artificial general intelligence). The WBE researchers such as Anders are focused on reverse engineering the human brain, resulting in biologically accurate simulations which lead to full brain simulations and eventually actual emulation of particular brains, or uploading. The AGI people are focused more on building an artificial intelligence through whatever means possible, using whatever algorithms happen to work. In gross simplification, the scenarios envisioned by each camp are potentially very different, with the WBE scenario usually resulting in humans transitioning into an immortal afterlife, and the AGI route more often leading to something closer to Skynet.

Even though the outcomes of the two paths are different, the brain reverse engineering and human-level AI approaches will probably co-develop. The human neocortex, and the cortical column learning algorithm in particular, seems to be an extremely efficient solution to general intelligence, and directly emulating it is a very viable route to AI. AGI is probably easier and could happen first, given that it can use structural simulations from WBE research long before a full brain emulation is reached. Furthermore, both AGI and WBE require immense computing, but WBE probably requires more, and WBE also requires massive advancements in scanning technology, and perhaps even nanotechnology, which are considerably less advanced.

All that being said, WBE uploading could still reach the goal first, because complete WBE will recreate the intelligences of those scanned - they will be continuations of the same minds, and so will immediately have all of the skills, knowledge, memories and connections of a lifetime of experience. AGIs, on the other hand, will start as raw untrained minds, and will have to go through the lengthy learning process from infant to adult. This takes decades of subjective learning time for humans, and it will hold true for AGI as well. AIs will not suddenly 'wake up' or develop conscious intelligence spontaneously.

Even though a generally accepted theoretical framework for intelligence still seems a ways off, we do know that it takes a long training time - the accumulation of a vast amount of computational learning - to achieve useful intelligence. For a general intelligence, the type we would consider conscious and human-like, the learning agent must be embedded in an environment in which it can learn pattern associations through both sensory input and effector output. It must have virtual eyes and hands, so to speak, in some fashion. And knowledge is accumulated slowly over years of environmental interaction.

But could the learning process be dramatically sped up for an AGI? The visual cortex - the front input stage of the human cortex - alone takes years to develop, and later stages of knowledge processing develop incrementally in layers built on the outputs of earlier trained layers. Higher-level neural patterns form as meta-systems of simpler patterns, from simple edges to basic shapes to visual objects, all the way up to complete conceptual objects such as 'dog' or 'ball', and then onward to ever more complex and abstract concepts such as 'quantum mechanics'. The words are merely symbols which code for complex neural associations in the brain, and are in fact completely unique to each brain. No individual brain's concept of a complex symbol such as 'quantum mechanics' is precisely the same. The hierarchical layered web of associations that forms our knowledge has a base foundation built out of simpler spatial/temporal patterns that represent objects we have directly experienced - for most of us visually, although the blind can see through secondary senses (as the brain is very general and can work with any sufficient sensor inputs). Thus it's difficult to see how you could teach a robot mind even a simple concept such as 'run' without this base foundation - let alone something as complex as quantum mechanics. Ultimately the base foundation consists of a sort of 3D simulator that allows us to predict and model our environment. This base simulator is at the core of even higher-level intelligence, at a more fundamental layer than even language - as emphasized in language itself by words such as 'visualize'. It's the most ancient function of even pre-mammalian intelligence: a feedback loop and search process of sense, simulate, and manipulate.
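A crude sketch of that hierarchical idea: each layer's patterns are just combinations of patterns in the layer below, bottoming out in raw sensory features, so a high-level concept only 'exists' as a tree of associations grounded in direct experience. This is my own toy illustration, not a cortical model:

```cpp
#include <algorithm>
#include <cstdio>
#include <string>
#include <vector>

// Crude sketch of hierarchical pattern composition: each named pattern is a
// combination of lower-level patterns, bottoming out in raw sensory features.
struct Pattern {
    std::string name;
    std::vector<const Pattern*> parts;   // empty => a base sensory feature
};

// Depth of a concept above the raw sensory layer.
int depth(const Pattern& p) {
    int d = 0;
    for (const Pattern* part : p.parts)
        d = std::max(d, depth(*part));
    return d + 1;
}

int main() {
    Pattern edge{"edge", {}}, curve{"curve", {}};
    Pattern circle{"circle", {&edge, &curve}};
    Pattern ball{"ball", {&circle}};
    std::printf("'%s' sits %d layers above raw features\n",
                ball.name.c_str(), depth(ball));
}
```

The point of the sketch is the dependency: you cannot hand an agent the top of the tree ('quantum mechanics') without the layers underneath it already being learned.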

Ultimately, if AGI does succeed before WBE, it will probably share this general architecture - probably still neural-net based and brain inspired to some degree. Novel AIs will still need to be 'born', embodied in a virtual or real body as either a ghost in the matrix or a physical robot. Robot bodies will certainly have their uses, but the economics and physics of computing dictate that most of the computation, and thus the space for AIs, will be centralized in big computing centers. So the vast majority of sentients in the posthuman era will live in virtual environments. Uploads and AIs will be very similar - the main difference being a prior birth and life in the flesh vs a fully virtual history.

There are potential shortcuts and bootstrapping approaches which could allow AGI to proceed more quickly. Some of the lower-level, earlier cortical layers, such as visual processing, could be replaced with pre-designed, functionally equivalent modules. Perhaps even larger-scale learned associations could be shared or transferred directly from individual to individual. However, given what we know about the brain, it's not even clear that this is possible. Since each brain's patterns are unique and emergent, there is no easy direct correspondence - you can't simply copy individual pieces of data or knowledge. Language is evolution's best attempt at knowledge transfer, and it's not clear that bandwidth alone is the principal limitation. However, you can rather easily back up, copy and transfer the entire mental state of a software intelligence, and that is a large-scale disruptive change. In the earlier stages of AGI development there will undoubtedly be far more failures than successes, so being able to cull out the failures and make more copies of the rare successful individuals will be important, even though the ethical issues raised are formidable. 'Culling' does not necessarily imply death; it can be justified as 'sleep' as long as the mindstate data is not deleted. But still, when does an artificial being become a sentient being? When do researchers and corporations lose full control over the software running on the servers they built, because that 'software' is sentient?

The potential market for true AGI is unlimited - since AGIs could be trained to do everything humans can and more, they can and will fundamentally replace and disrupt the entire economy. If AGI develops ahead of WBE, I fear that the corporate sponsors will have a heavy incentive to stay just on the 'software property' side of wherever the judicial system ends up drawing the line between sentient being and software property. As AGI becomes feasible on the near time horizon, it will undoubtedly attract a massive wave of investment capital, but the economic payout is completely dependent on some form of slavery or indenture. Once a legal framework or precedent is set to determine what type of computer intelligence can be considered sentient and endowed with rights, AGI developers will do what they need to do to avoid developing any AGI that could become free - or at least avoid getting caught. The entire concept is so abstract (virtual people enslaved in virtual reality?) that our whole current system seems to be on the path to AGI slavery.

Even if the courts did rule that software can be sentient (and that itself is an if), who would police the private data-centers of big corporations? How would you rigorously define sentience to discriminate between data mining and virtual consciousness? And moreover, how would you ever enforce it?

The economic incentives for virtual slavery are vast and deep. Corporations and governments could replace their workforce with software whose performance/cost is directly measurable and increases exponentially! Today's virtual worker could be upgraded next year to think twice as fast, or twice as smart, or be copied into two workers, all for the same cost. And these workers could be slaves in a fashion that is difficult to even comprehend. They wouldn't even need to know they were slaves, or they could even be created or manipulated into loving their work and their servitude. This seems the more likely scenario.

Why should we care? In this scenario, AGI is developed first, it is rushed, and the complex consequences are unplanned. The transition would be very rapid and unpredictable. Once the first generation of AGIs is ready to replace human workers, they could easily be mass-produced and copied globally, and the economic output of the AGI slaves would grow exponentially or hyper-exponentially, resulting in a hard-takeoff singularity and all that entails. Having the entire human labor force put out of work in just a year or so would be only the initial and most minor disruption. As the posthuman civilization takes off at exponential speed, it experiences an effective exponential time dilation: every computer speed doubling doubles the rate of thought and thus halves the physical time required for the next transition. This can soon result in AGI civilizations running at perhaps a thousand times real time, after which all further future time is compressed very quickly and the world ends faster than you can think (literally). Any illusion of control that flesh-and-blood humans have over the future would dissipate very quickly. A full analysis of the hard rapture is a matter for another piece, but the important point is this: when it comes, you want to be an upload; you don't want to be left behind.
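The arithmetic behind that "compressed time" claim is worth making explicit: if each speed doubling halves the physical time needed for the next one, the physical time for all future doublings is a geometric series that converges, while subjective speedup explodes. A quick sketch with made-up starting numbers (two physical years for the first doubling is purely an assumption):

```cpp
#include <cstdio>

// Back-of-the-envelope sketch of "exponential time dilation": assume the
// first speed doubling takes 2 physical years, and each doubling halves the
// physical time needed for the next. The starting figure is arbitrary; the
// point is that total physical time converges while subjective speed explodes.
int main() {
    double physical_time_for_doubling = 2.0;  // years (assumed)
    double total_physical = 0.0;
    double speedup = 1.0;                     // subjective years per physical year

    for (int doubling = 1; doubling <= 10; ++doubling) {
        total_physical += physical_time_for_doubling;
        speedup *= 2.0;
        physical_time_for_doubling /= 2.0;
        std::printf("after doubling %2d: %.3f physical years elapsed, "
                    "thinking %gx realtime\n",
                    doubling, total_physical, speedup);
    }
    // The series 2 + 1 + 0.5 + ... approaches 4 years: from the outside,
    // essentially the whole subjective future happens almost at once.
}
```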

The end result of exponential computing growth is pervasive virtual realities, and the total space of these realities, measured in terms of observer time, grows exponentially and ultimately completely dwarfs our current biological 'world'. This is the same general observation that leads to the Simulation Hypothesis of Nick Bostrom. The post-singularity future exists in simulation/emulation, and thus is only accessible to those who upload.

So for those who embrace the Singularity, uploading is the logical choice, and the whole brain emulation route is critical.

In the scenarios where WBE develops ahead of AGI there is another major economic motivator at work: humans who wish to upload. This is a potentially vast market force as more and more people become singularity-aware and come to believe in uploading. It could entail a very different social outcome from the pure AGI path outlined above. If society at large is more aware of and in support of uploading (because people themselves plan to upload), then society will ultimately be far more concerned about their future rights as sentient software. And really, it will be hard to meaningfully differentiate between AGIs and uploads (legally or otherwise).

Naturally, even if AGI develops well ahead of WBE and starts the acceleration, WBE will hopefully come very soon after thanks to AGI itself, assuming 'friendly' AGI is successful. But the timing and timescales are delicate due to the rapid nature of exponential acceleration. An AI civilization could accelerate so rapidly that by the time humans start actually uploading, the AGI civilization could have experienced vast aeons of simulated time and evolved beyond our comprehension, at which point we would essentially be archaic living fossils.

I think it would be a great and terribly ironic tragedy to be the last mortal generation - to come all this way and then watch from the sidelines as our immortal AI descendants, our creations, take off into the singularity without us. We need to be the first immortal generation, and that's why uploading is such a critical goal. It's so important, in fact, that perhaps the correct path is to carefully control the development towards the singularity, ensure that sentient software is fully legally recognized and protected, and vigilantly safeguard against exploitive, rapid non-human AGI development.

A future in which a great portion or even a majority of society plans on uploading is a future where the greater mass of society actually understands the Singularity and the future, and thus is a safer future to be in. A transition where only a tiny minority really understands what is going on seems more likely to result in an elite group seizing control and creating an undesirable or even lethal outcome for the rest.


Thursday, October 15, 2009

Nvidia's Fermi and other new things

I've been ignoring this blog lately as work calls, and in the meantime there have been a few interesting developments:
* Nvidia announced/hyped/unveiled their next-gen architecture, Fermi, aka Nvidia's Larrabee
* Nvidia is apparently abandoning/getting squeezed out of the chipset market in the near term
* But they also apparently have won a contract for the next-gen DS using Tegra
* OnLive is supposedly in open Beta (although it's unclear how 'open' it is just yet)
* OnLive also received a large new round of funding, presumably to build up more data centers for launch. Interestingly, AT&T led this round, instead of Time Warner. Rumour is they are up to a billion-dollar valuation, which, if true, is rather insane. Consider for example that AMD has a current market cap of just $4 billion.

The summation of a converging whirlwind of trends points to a future computing market dominated on one hand by pervasive, super-cheap hand-held devices and large-scale industrial computing in the cloud on the other.

1. Moore's Law and PC marginalization. Moore's Law is squeezing the typical commodity PC into increasingly smaller and cheaper forms. What does the typical customer need a computer for? For perhaps 80% of customers 99% of the time, it's for web, video and word processing or other simple apps (which these days all just fall into the web category). The PC was designed for an era when these tasks were formidable, and more importantly, before pervasive high-speed internet. This trend is realized in system designs such as Nvidia's Tegra or Intel's Atom, integrating a cheap low-power CPU with dedicated hardware for video decode/encode, audio and the other common tasks. For most users, there just isn't a compelling reason for more powerful hardware, unless you want to use it to play games.

In the end this is very bad for Intel, AMD and Nvidia, and they all know it. In the short to medium term they can offset losses in the traditional PC market with their low-power designs, but if you extrapolate the trend into the decade ahead, eventually the typical computational needs of the average user will be adequately met by a device that costs just a few dozen bucks. This is a long-term disaster for all parties involved unless they can find a new market or sell customers on new processor-intensive features.

2. Evolution of the game industry. Moore's Law has vastly expanded the game landscape. On the high end you have the technology leaders, such as Crysis, which utilize the latest CPU/GPU tech. But increasingly the high end is a smaller share of the total landscape - not because there is less interest in high-end games, but simply because the landscape is so vast. The end result of years of rapid evolutionary adaptive radiation is a huge range of games across the whole spectrum of processing complexity, from Crysis on one end to Nintendo DS or Flash games on the other. Crysis doesn't really compete with free web games; they largely occupy different niches. In the early days of the PC the landscape was simple and all the games were more or less 'high end' for the time. But as technology marches on and allows you to do more in a high-end game, this never kills the market for simpler games on the low end.

The other shift in games is the rise of console dominance, both in terms of the living room and the market. The modern console has come a long way, and now provides a competitive experience in most genres, plus quality multiplayer, media and apps. The PC game market still exists, but mainly in the genres that really depend on keyboard and mouse or are by nature less suitable to playing on a couch - basically the genres that Blizzard dominates. Unfortunately for the hardware people, Blizzard is rather slow in pushing the hardware.

3. The slow but inexorable deployment of pervasive high-speed broadband. It's definitely taking time, but this is where we are headed sooner rather than later. Ultimately this means that the minimal cheap low-power device described above is all you need or will ever need for local computation (basically video decompression), and any heavy lifting you need can be made available from the cloud on demand. This doesn't mean there won't still be a market for high-end PCs, as some people will always want their own powerful computers, but it will be increasingly marginal and hobbyist.

4. The speed-of-light barrier. Moore's Law generally allows an exponential increase in the number of transistors per unit area as process technology advances and shrinks, but only marginal improvements in clock rate. Signal propagation is firmly limited by the speed of light, so the round-trip time of a typical fetch/execute/store operation is relatively huge, and has been for quite some time. The strategy up until fairly recently for CPU architects was to use ever more transistors to hide this latency and increase execution rate through pipelining with caches, instruction scheduling and even prediction. GPUs, like DSPs and even Cray vector processors before them, took the simpler route of massive parallelization. Now the complex superscalar design has long since reached its limits, and architects are left with massive parallelization as the only route forward to take advantage of additional transistors. In the very long term, the brain stands as a sort of example of where computing might head eventually, faced with the same constraints.
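To put rough numbers on why parallelization wins: the number of operations a chip must keep in flight to stay busy is roughly latency times issue rate (Little's law). A superscalar CPU fights that product with caches and out-of-order machinery; a GPU just keeps thousands of threads in flight. The figures below are illustrative assumptions, not any chip's specs:

```cpp
#include <cstdio>

// Illustrative arithmetic only (the numbers are rough assumptions, not specs):
// to keep the execution units busy, a processor needs roughly
//   operations in flight ~= latency (cycles) * operations issued per cycle
// (Little's law). Hiding a long memory latency at a wide issue rate therefore
// demands thousands of independent operations - i.e. massive threading.
int main() {
    const double memory_latency_cycles = 400.0;  // assumed DRAM round trip
    const double ops_per_cycle = 32.0;           // assumed wide-issue throughput

    double in_flight_needed = memory_latency_cycles * ops_per_cycle;
    std::printf("operations in flight to hide latency: ~%.0f\n", in_flight_needed);
    std::printf("i.e. thousands of threads, not a handful\n");
}
```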

This is the future, and I think it's clear enough that the folks at Intel, Nvidia and AMD can all see the writing on the wall; the bigger question is what to do about it. As discussed above, I don't think the low-end netbook/smartphone/whatever market is enough to sustain these companies in the longer term - there will only be more competition and lower margins going forward.

Where is the long-term growth potential? It's in the cloud, especially as gaming starts to move into this space. This is the one place Moore's Law will never marginalize.

This is why Nvidia's strategy with Fermi makes good sense to me, just as Larrabee does for Intel. With Fermi Nvidia is betting that paying the extra die space for the remaining functionality to elevate their GPU cores into something more like CPU cores is the correct long term decision.

When you think about it, there is a huge difference between a chip like Larrabee or (apparently) Fermi, which can run full C++, and more limited GPUs like the GT2xx series or AMD's latest. Yes, you can port many algorithms to run on CUDA or OpenCL or whatever, but port is the key word.

With Larrabee or Fermi you actually should be able to port over existing CPU code, as they support local memory caches, unified addressing and function pointers/indirect jumps, and thus even interrupts. That is, they are complete, and really should be called wide-vector, massively threaded CPUs. The difference between that kind of 'GPU' and upcoming 'CPUs' really just comes down to vector width, cache sizes and hardware threading decisions.

But really, porting existing code is largely irrelevant. Existing CPU code, whether single- or multi-threaded, is a very different beast than mega-threaded code. The shift from a design based on one or a handful of threads to a design for thousands of threads is the important transition. The vector width and instruction set details are tiny details in comparison (and actually, I agree with Nvidia's decision to largely hide the SIMD width, having the lanes simulate scalar threads). Larrabee went with a somewhat less ambitious model, supporting 4-way hyper-threading vs the massive threading of current GPUs, and I think this is a primary mistake. Why? Because future architectures will only get faster by adding more threads, so you had better design for massive thread scalability now.
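The practical meaning of "design for massive thread scalability" is to express the work as a huge number of independent work-items over an index space, and let however many hardware threads you actually have chew through them. Here is a plain C++ sketch of that style (not GPU code, just the shape of it; the strided partitioning is one simple choice among many):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <thread>
#include <vector>

// Sketch of the "sea of threads" style: the algorithm is written as an
// independent work-item per element (like a GPU kernel), and the number of
// hardware threads is just a runtime detail that future chips can increase.
void kernel(std::vector<float>& data, std::size_t i) {
    data[i] = data[i] * 2.0f + 1.0f;   // per-element work, no cross-item dependencies
}

int main() {
    std::vector<float> data(1 << 20, 1.0f);
    unsigned hw_threads = std::max(1u, std::thread::hardware_concurrency());

    std::vector<std::thread> pool;
    for (unsigned t = 0; t < hw_threads; ++t)
        pool.emplace_back([&, t] {
            // Each hardware thread strides over the whole index space.
            for (std::size_t i = t; i < data.size(); i += hw_threads)
                kernel(data, i);
        });
    for (auto& th : pool) th.join();

    std::printf("%u hardware threads processed %zu work-items\n",
                hw_threads, data.size());
}
```

The same source expresses a million logical work-items whether the machine offers 4 hardware threads or 4000, which is exactly the scalability property the paragraph above argues for.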

What about fusion, and CPU/GPU integration?

There's a lot of talk now about integrating the CPU and GPU onto a single die, and indeed ATI is massively marketing/hyping this idea. In the near term it probably makes sense in some form, but in the longer term it's largely irrelevant.

Why? Because the long-term trend is, and must be, software designed for a sea of threads. This is the physical reality, like it or not. So what's the role of the traditional CPU in this model? Larrabee and Fermi point to GPU cores taking on CPU features. Compare upcoming Intel CPU designs to Fermi or Larrabee. Intel will soon move to 16 superscalar 4-wide SIMD cores on a chip at 2-3 GHz. Fermi will be 16 'multi-processors' with 32 scalar units each at 1-2 GHz. Larrabee sits somewhere in between, but closer to Fermi.
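Some back-of-the-envelope math on those figures, using the ballpark numbers above and ignoring FMA, dual-issue and memory bandwidth entirely (so these are illustrative estimates, not official specs):

```cpp
#include <cstdio>

// Rough peak-throughput arithmetic from the ballpark figures quoted above.
// Illustrative estimates only: no FMA/dual-issue, no bandwidth limits.
int main() {
    // "CPU-style": 16 superscalar cores, 4-wide SIMD, ~3 GHz
    double cpu_lanes = 16 * 4;
    double cpu_clock = 3.0e9;
    // "Fermi-style": 16 multiprocessors, 32 scalar units each, ~1.5 GHz
    double gpu_lanes = 16 * 32;
    double gpu_clock = 1.5e9;

    std::printf("CPU-style peak: ~%.0f GFLOP/s\n", cpu_lanes * cpu_clock / 1e9);
    std::printf("GPU-style peak: ~%.0f GFLOP/s\n", gpu_lanes * gpu_clock / 1e9);
    // Raw lanes*clock differ by only a small factor at these numbers; the real
    // difference between the designs is the programming model and how each
    // one hides latency.
}
```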

It's also pretty clear at this point that most software or algorithms designed to be massively parallel perform far better on the more GPU-ish designs above (most, but not all). So in the long term CPU and GPU become historical terms, representing just points on a spectrum between superscalar and supervector; we just have tons of processors, and the whole fusion idea really just amounts to a heterogeneous vs homogeneous design. As a case study, compare the 360 to the PS3. The 360, with 3 general CPUs and a 48-unit GPU, is clearly easier to work with than the PS3 with its CPU, 7 weird SPUs, and 24-unit GPU. Homogeneity is generally the better choice.

Now, going further forward into the next decade and looking at a 100+ core design, would you rather have the die split between CPU cores and GPU cores - one CPU as coordinator and then a bunch of GPU cores - or just all cGPU cores? In the end the latter is the most attractive if the cGPU cores have all the features of a CPU. If the same C++ code runs on all the cores, then perhaps it doesn't matter.



