This part of the blog is where I quote other people, tech news, etc., that agree with some of the things I say in my articles…


Optical imaging method shows brain multiplexing

March 25, 2011 by Editor
Visualization of how the primary visual cortex encodes both orientation and retinotopic motion of a visual object simultaneously (Image credit: Dirk Jancke)

Researchers have developed a real-time optical imaging method that exploits a specific voltage-sensitive dye to demonstrate brain multiplexing in the visual cortex, says Dr. Dirk Jancke, neuroscientist at the Ruhr-University in Bochum, Germany.

Neurons synchronize with different partners at different frequencies. Optical imaging allows fine grained resolution of cortical pattern activity maps in which local groups of active nerve cells represent grating orientation. A particular grating orientation activates different groups of nerve cells resulting in unique patchy patterns.

The researchers used simple oriented gratings with alternating black-and-white stripes drifting at constant speed across a monitor screen. They detected brain activity that signals both the grating’s orientation and its motion simultaneously.

They used a voltage-sensitive dye that changes fluorescence whenever nerve cells receive or send electrical signals. High-resolution camera systems simultaneously captured the activities of millions of nerve cells across several square millimeters of the brain.

The study showed that motion direction and speed can be estimated independently from orientation maps.  This resolves ambiguities occurring in visual scenes of everyday life, and starts to show how the brain handles complex data to create a stable perception at a given moment of time, says Jancke.


Computer scientists at the Air Force Research Lab in Rome, NY, have assembled one of the world’s largest, fastest, and cheapest supercomputers by linking together 1,716 PlayStation 3s, says Mark Barnell, director of high-performance computing at the Air Force Research Lab.

The supercomputer can scan or process text in any language at 20 pages per second, fill in missing sections it has never seen with 99.9 percent accuracy, and tell the user whether the information is important.

Video processed from radar signals, including ground-based radar images of space objects, can be viewed in real time or played back to investigate what led to an event. A viewer can change perspectives, going from air to ground to look around buildings.

The supercomputer went online late last year, and it will likely change the way the Air Force and the Air National Guard enable 24-hour, real-time surveillance over a roughly 15-mile-wide area, says Barnell.


A new search tool developed by researchers at Microsoft indexes medical images of the human body, automatically finding organs and other structures, using 3D medical imagery.

CT scans use X-rays to capture many slices through the body that can be combined to create a 3D representation. This is a powerful tool for diagnosis, but it’s difficult to navigate.

The new search tool indexes scanned data and lists the organs it finds at the side of the screen, creating a table of hyperlinks for the body. A user can click on the word “heart” and be presented with a clear view of the organ.

The software uses the pattern of light and dark in the scan to identify particular structures. It was developed by training machine-learning algorithms to recognize features in hundreds of scans in which experts had marked the major organs.


Engineering researchers at the University of Michigan have designed a material system that spontaneously forms nano-size spirals of electric polarization at controllable intervals, says professor Xiaoqing Pan.

This improvement in the performance of ferroelectric materials could provide natural budding sites for polarization switching and thus reduce the power needed to flip the molecular bits used for data storage.

To make this happen, the engineers layered a ferroelectric material on an insulator with crystal lattices that were closely matched. The polarization causes large electric fields at the ferroelectric surface that are responsible for the spontaneous formation of the budding sites, known as “vortex nanodomains.”

Technical applications include memory devices with more storage capacity than magnetic hard drives, faster write speed, and longer lifetimes than flash memory.


A tiny wearable positron emission tomography (PET) scanner has been used to track chemical activity in the brains of unrestrained animals while an animal behaves naturally; it could be modified for people.

By revealing neurological circuitry as the subjects perform normal tasks, researchers say, the technology could greatly broaden the understanding of learning, addiction, depression, and other conditions.

A conventional PET scanner is so large that these studies have to be performed with the subject lying inside a large tube.


Three panelists presented alternate views on the Singularity at the SXSW conference in Austin this week, and blogger Michael Anissimov neatly summarized them. A few excerpts:

Natasha Vita-More: “The very same technology that proposes to build superintelligences could also dramatically enhance human cognition…. The coincidental and subsequent developments of inventive projects arrived at through digital media, virtuality, and immersivity have furthered the scope of human experiential enhancement as artificial intelligence technologies are fostering arguably viable developments. This overlap of computational and physical forms an evolutionary crossing point. Could the human become a super AI?”

Doug Lenat: “The bottleneck is software, not hardware…. The limitations of WATSON, Google, SIRI, etc. are ones of breadth of inference, not quantitative performance metrics…. For many years now, dozens of us have been building CYC, a repository for the common sense knowledge and general inferencing strategies…. We are on the verge of a sort of Singularity in the building of CYC: it now knows enough, and can infer enough, to carry on interactive dialogues in English, opening up the possibility of having millions of people helping it to cross that finish line in 2012.”

Michael Vassar: “My work is focused on the exploration and integration of the visions of the Technological Singularity developed by Vernor Vinge, Raymond Kurzweil and Eliezer Yudkowsky…. I associate these visions of the Singularity with three stages in the likely evolution of information processing, the combined impact of which will most likely make the 22nd century resemble the 20th less closely than the 20th century resembles the Cambrian. The earliest, Vingean stage is of particular importance, because the development of superhuman collective intelligences is likely to mark the end of the period during which deliberate human decision making can enable human values to directly influence humanity’s future.”


Researchers at the Georgia Institute of Technology have found that people generally have a positive response toward being touched by a robotic nurse, but that their perception of the robot’s intent makes a significant difference.

In the study, researchers looked at how people responded when a robotic nurse, known as Cody, touched and wiped a person’s forearm. Although Cody touched the subjects in exactly the same way, they reacted more positively when they believed Cody intended to clean their arm versus when they believed Cody intended to comfort them.

The team believes that future research should investigate ways to make robot touch more acceptable to people, especially in healthcare. Future healthcare tasks, such as wound dressing and assisting with hygiene, will likely require a robotic nurse to touch the patient’s body.

The research is being presented March 9 at the Human-Robot Interaction conference in Lausanne, Switzerland.


The U.S. Navy recently issued a proposal to build “a coordinated and distributed swarm of micro-robots” capable of manufacturing “novel materials and structures,” using desktop manufacturing  to “print” 3-D objects.

DARPA has also experimented with “programmable materials,” which could make it possible to “mimic the shape and appearance of a person or object being imaged in real time.”


Zite is an iPad app that crawls a half million Web domains to find specific reading material that would be of interest to you, according to your social network and/or online reading behavior.

It evaluates this potential content by tracking signals (like tweets, comments, tags and sharing) from stories that indicate a certain level of social interest and momentum in the story. The result is a personalized magazine that gets more accurately targeted toward its reader the more it’s used.
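The signal-tracking described above can be sketched as a simple weighted score over social signals. The signal names, weights, and story titles below are invented for illustration; they are not Zite's actual model.

```python
# Toy ranking of stories by social-signal momentum. Weights are
# illustrative assumptions, not a real recommendation algorithm.

def score_story(signals, weights=None):
    """Combine social signals (tweets, comments, tags, shares) into one score."""
    weights = weights or {"tweets": 1.0, "comments": 2.0, "tags": 1.5, "shares": 3.0}
    return sum(weights.get(name, 0.0) * count for name, count in signals.items())

stories = [
    ("Nanowire processors", {"tweets": 120, "comments": 8, "shares": 15}),
    ("Robot marathon", {"tweets": 40, "comments": 2, "shares": 3}),
]

# Highest-momentum stories go to the front of the personalized magazine.
ranked = sorted(stories, key=lambda s: score_story(s[1]), reverse=True)
```

Personalization would then adjust the weights per reader as usage data accumulates.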


Artificial intelligence “e-discovery” software can analyze documents in a fraction of the time for a fraction of the cost of the traditional platoon of lawyers and paralegals who work for months at high hourly rates.

E-discovery technologies generally fall into two broad categories that can be described as “linguistic” and “sociological.”  The most basic linguistic approach uses specific search words to find and sort relevant documents. More advanced programs filter documents through a large web of word and phrase definitions. A user who types “dog” will also find documents that mention “man’s best friend” and even the notion of a “walk.”
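The basic "linguistic" approach above can be sketched as query expansion through a web of related words and phrases before matching. The thesaurus here is a hand-made illustration, not any product's actual word network.

```python
# Sketch of linguistic e-discovery: expand a search term into related
# phrases, then match documents against the expanded set.

THESAURUS = {
    "dog": ["man's best friend", "walk", "leash", "canine"],
}

def expand(term):
    """Return the term plus its related words and phrases."""
    return [term] + THESAURUS.get(term, [])

def matches(query, document):
    """True if the document mentions the query term or any related phrase."""
    doc = document.lower()
    return any(phrase in doc for phrase in expand(query))

print(matches("dog", "We took man's best friend for a walk."))  # True
```

A real system would layer statistical relevance ranking on top of this kind of expansion.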

The sociological approach adds an inferential layer of analysis, mimicking the deductive powers of a human Sherlock Holmes.

Engineers and linguists at Cataphora, an information-sifting company based in Silicon Valley, have their software mine documents for the activities and interactions of people — who did what when, and who talks to whom. The software seeks to visualize chains of events. It identifies discussions that might have taken place across e-mail, instant messages and telephone calls.

The Cataphora software can also recognize the sentiment in an e-mail message — whether a person is positive or negative, or what the company calls “loud talking” — unusual emphasis that might give hints that a document is about a stressful situation. The software can also detect subtle changes in the style of an e-mail communication.

Automation of higher-level jobs is accelerating because of progress in computer science and linguistics. Mike Lynch, the founder of Autonomy, is convinced that “legal is a sector that will likely employ fewer, not more, people in the U.S. in the future.”


Can machines surpass humans in intelligence? Watson’s victory in the recent “Jeopardy!” TV show supports that idea, which is suggested in the new film, Transcendent Man, says The Economist.

Alternatively, some technology experts think mankind will transform itself into a fitter, smarter and better-looking species in coming decades, Juan Enriquez and Steve Gullans argue in Homo Evolutis, a new electronic book.


Excessive use of the Internet, cell phones, and other technologies can cause us to become more impatient, impulsive, forgetful, and narcissistic, says psychiatrist Elias Aboujaoude, MD, clinical associate professor of psychiatry and behavioral sciences and director of Stanford University’s impulse control and obsessive-compulsive disorder clinics, in his new book on the “e-personality,” Virtually You: The Dangerous Powers of the E-Personality.

Drawing from his clinical work and personal experience, he discusses the Internet’s psychological impact and how our online traits are unconsciously being imported into our offline lives.


Scientists are developing 3D “bioprinters” that will be able to print out skin, cartilage, bone, and other body parts.

Professor James Yoo, from the Institute of Regenerative Medicine at Wake Forest University, is developing a system that will print skin directly onto burn wounds. The bioprinter has a built-in laser scanner that scans the wound and determines its depth and area. The scan is converted into three-dimensional digital images that enable the device to calculate how many layers of skin cells need to be printed on the wound to restore it to its original configuration.

At Cornell University’s Computational Synthesis Laboratory, Professor Hod Lipson demonstrated a bioprinter by printing an ear, working from a scan of a human ear and a computer file containing the three-dimensional coordinates. The ear was printed using silicone gel instead of real human ear cells.


Is it possible that, in the not-so-distant future, we will be able to reshape the human body so as to have extra limbs? A third arm helping us out with chores or assisting a paralyzed person?

Neuroscientists at Karolinska Institutet in Stockholm report a perceptual illusion in which a rubber right hand, placed beside the real hand in full view of the participant, is perceived as a supernumerary limb belonging to the participant’s own body. The effect is verified by physiological evidence obtained from skin conductance responses when physically threatening either the rubber hand or the real one.

The illusion reported here is qualitatively different from the traditional rubber hand illusion. The subjects feel less disownership of the real hand and a stronger feeling of having two right hands. These results suggest that the artificial hand “borrows” some of the multisensory processes that represent the real hand, leading to duplication of touch and ownership of two right arms.

The scientists say this work represents a major advance because it challenges the traditional view of the gross morphology of the human body as a fundamental constraint on what we can come to experience as our physical self, by showing that the body representation can easily be updated to incorporate an additional limb.


The world’s first full-length marathon for two-legged robots kicked off in Japan on Thursday, with the toy-sized humanoids due to run 42.195 kilometers (26.2 miles) over four days.


Robots might one day trace the origin of their consciousness to recent experiments aimed at instilling them with the ability to reflect on their own thinking.

Although granting machines self-awareness might seem more like the stuff of science fiction than science, there are solid practical reasons for doing so, explains roboticist Hod Lipson at Cornell University’s Computational Synthesis Laboratory.

“The greatest challenge for robots today is figuring out how to adapt to new situations,” he says. “There are millions of robots out there, mostly in factories, and if everything is in the right place at the right time for them, they are superhuman in their precision, in their power, in their speed, in their ability to work repetitively 24/7 in hazardous environments—but if a bolt falls out of place, game over.”

This lack of adaptability “is the reason we don’t have many robots in the home, which is much more unstructured than the factory,” Lipson adds. “The key is for robots to create a model of themselves to figure out what is working and not working in order to adapt.”

So, Lipson and his colleagues developed a robot shaped like a four-legged starfish whose brain, or controller, developed a model of what its body was like. The researchers started the droid off with an idea of what motors and other parts it had, but not how they were arranged, and gave it a directive to move. By trial and error, receiving feedback from its sensors with each motion, the machine used repeated simulations to figure out how its body was put together and evolved an ungainly but effective form of movement all on its own. Then “we removed a leg,” and over time the robot’s self-image changed and learned how to move without it, Lipson says.

Now, instead of having robots model their own bodies, Lipson and Juan Zagal, now at the University of Chile in Santiago, have developed ones that essentially reflect on their own thoughts. They achieve such thinking about thinking, or metacognition, by placing two minds in one bot. One controller was rewarded for chasing dots of blue light moving in random circular patterns and avoiding red dots as if they were poison, whereas a second controller modeled how the first behaved and whether it was successful or not.

So why might two brains be better than one? The researchers changed the rules so that chasing red dots and avoiding blue dots were rewarded instead. By reflecting on the first controller’s actions, the second one could make changes to adapt to failures—for instance, it filtered sensory data to make red dots seem blue and blue dots seem red, Lipson says. In this way the robot could adapt after just four to 10 physical experiments instead of the thousands it would take using traditional evolutionary robotic techniques.
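The two-controller adaptation described above can be caricatured in a few lines: the first controller's policy stays fixed, and the second adapts to the rule change by re-labeling the sensory input rather than retraining. This is purely an illustrative sketch of the idea, not the researchers' actual implementation.

```python
# Minimal sketch of metacognitive adaptation: when the reward rules flip,
# the second controller filters perception so the old policy still succeeds.

def first_controller(color):
    """Fixed learned policy: approach blue dots, avoid red ones."""
    return "approach" if color == "blue" else "avoid"

def second_controller(color, rules_flipped):
    """Metacontroller: swaps the color labels once it models the rule change."""
    if rules_flipped:  # red is now rewarding, blue is now "poison"
        color = {"red": "blue", "blue": "red"}[color]
    return first_controller(color)

print(second_controller("red", rules_flipped=True))  # approach
```

The point of the sketch is that fixing the perceptual filter requires far fewer trials than re-evolving the whole policy, echoing the four-to-ten experiments reported above.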

“This could lead to a way to identify dangerous situations, learning from them without having to physically go through them—that’s something that’s been missing in robotics,” says computer scientist Josh Bongard at the University of Vermont, a past collaborator of Lipson’s who did not take part in this study.

Beyond robots that think about what they are thinking, Lipson and his colleagues are also exploring if robots can model what others are thinking, a property that psychologists call “theory of mind”. For instance, the team had one robot observe another wheeling about in an erratic spiraling manner toward a light. Over time, the observer could predict the other’s movements well enough to know where to lay a “trap” for it on the ground. “It’s basically mind reading,” Lipson says.

“Our holy grail is to give machines the same kind of self-awareness capabilities that humans have,” Lipson says. “This research might also shed new light on the very difficult topic of our self-awareness from a new angle—how it works, why and how it developed.”

PHYSORG SAYS: We tend to assume that robots need human input in order to understand the world around them. In the near future, humans may not even be a part of the robotic-learning equation. Soon, robots will be able to search the web all on their own: not the web as we know it, but a different web made exclusively for robots.

That web will be called RoboEarth.

RoboEarth will be a Wiki-style site designed specifically for robots. The site will work something like this: when a robot completes a task, it will be able to upload data related to that task. This data will be available to other robots that require information on the task. A simple download will allow robots to learn from each other, taking humans out of the equation.

Data sharing will be all that the RoboEarth does. Much like the web of the early days, it will be purely a data sharing tool. Do not expect to see robot-based auction sites or dating sites in the near future. Then again the robots may not miss those features.

The RoboEarth project is the brainchild of researchers at the Swiss Federal Institute of Technology in Zurich. The RoboEarth system will rely on having a certain amount of standardization between the robots who share the data. Otherwise, they will not be able to share the data universally. Without that standardization, the sharing of data will be very limited.
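The standardization requirement above can be sketched as a shared schema that every uploaded task record must satisfy. The field names and task data here are assumptions for illustration, not RoboEarth's actual format.

```python
# Hedged sketch of wiki-style robot knowledge sharing: records conform to
# a common schema so any robot can download and reuse them.

ROBOEARTH = {}  # stands in for the shared database

def upload(task_name, record):
    """Store a task record, enforcing the shared schema."""
    required = {"action_sequence", "object_model", "environment_map"}
    if not required.issubset(record):
        raise ValueError("record does not conform to the shared schema")
    ROBOEARTH[task_name] = record

def download(task_name):
    """Retrieve a task record another robot uploaded, or None."""
    return ROBOEARTH.get(task_name)

upload("open_door", {
    "action_sequence": ["grasp_handle", "turn", "push"],
    "object_model": "door_handle_v1",
    "environment_map": "hospital_corridor",
})
print(download("open_door")["action_sequence"][0])  # grasp_handle
```

Without the schema check, as the article notes, sharing would only work between identical robots.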

Since this project is estimated to be completed in about four years there will be some time for robots to get with the program.


The Internet is not only a tool for disseminating knowledge through scientific publications; it also has the potential to shape scientific research by expanding the field of metaknowledge — the study of knowledge itself, according to an article published by University of Chicago researchers in the journal Science.

The new possibilities for metaknowledge include developing a better understanding of science’s social context and the biases that can affect research findings and choices of research topics. Pooling research-related information online can shed light on how scientists’ personal backgrounds or funding sources shape their research approaches, and could open up new fields of study, wrote James Evans, assistant professor in sociology at the University of Chicago, and Jacob Foster, a post-doctoral scholar at the University, in an analysis supported with a National Science Foundation grant.

“The computational production and consumption of metaknowledge will allow researchers and policymakers to leverage more scientific knowledge—explicit, implicit, contextual—in their efforts to advance science,” the two wrote in the Perspectives article “Metaknowledge,” published in the Feb. 11 issue of Science. Metaknowledge is essential in a digital era in which so many investigations are linked electronically, they point out.

An important new tool for metaknowledge researchers seeking previously hidden connections is natural language processing, one of the rapidly emerging fields of artificial intelligence. NLP permits machine reading, information extraction and automatic summarization.

Researchers at Google used computational content analysis to identify the emergence of influenza epidemics by identifying and tracking related Google searches. The process was faster than other techniques used by public health officials. These content analysis techniques complement the statistical techniques of meta-analysis, which typically incorporate data from many different studies in an effort to draw a larger conclusion about a research question, such as the influence of class size on student achievement.
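The search-tracking idea above amounts to flagging an emerging epidemic when flu-related query volume rises well above its recent baseline. The data, window, and threshold below are invented for the example; Google's actual method fit query data against official surveillance records.

```python
# Illustrative spike detector over weekly query counts.

def detect_spike(weekly_counts, window=4, factor=2.0):
    """Return the first week index where counts exceed factor x trailing mean."""
    for i in range(window, len(weekly_counts)):
        baseline = sum(weekly_counts[i - window:i]) / window
        if weekly_counts[i] > factor * baseline:
            return i
    return None

queries = [100, 110, 95, 105, 98, 240, 400]  # flu-related searches per week
print(detect_spike(queries))  # 5
```

The appeal is latency: query counts arrive in near real time, while official case reports lag by days or weeks.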

For scientific research, meta-analysis can trace the connections between data and conclusions in ways that might not otherwise be noticed. For example, the availability of samples from the Southern Hemisphere related to continental drift has influenced the way in which geologists have made conclusions about plate tectonics.

Metaknowledge also has unveiled the possibility of “ghost theories” — implicit assumptions that may undergird scientific conclusions, even when researchers do not acknowledge them. For example, psychologists frequently use college students as research subjects and accordingly publish papers based on the behavior of a group that may or may not be typical of the entire population. Scholars using traditional metaknowledge techniques found that 67 percent of the papers published in the Journal of Personality and Social Psychology were based on studies of undergraduates. The use of computation could accelerate and widen the discovery of such ghost theories.

Entrenched scientific ideas can develop when studies repeatedly find conclusions that support previous claims by well-known scholars and also when students of distinguished researchers go on to do their own work, which also reinforces previous claims. Both of those trends can be uncovered by scholars working in metaknowledge, Evans and Foster said.

Metaknowledge also helps scholars understand the role funding plays in research. “There is evidence from metaknowledge that embedding research in the private or public sector modulates its path,” they write. “Company projects tend to eschew dogma in an impatient hunt for commercial breakthroughs, leading to rapid but unsystematic accumulation of knowledge, whereas public research focuses on the careful accumulation of consistent results.”

The promise of metaknowledge is its capacity to steer researchers to new fields, they said.

“Metaknowledge could inform individual strategies about research investment, pointing out overgrazed fields where herding leads to diminishing returns as well as lush range where premature certainty has halted promising investigation,” Evans and Foster said.


In the Feb. 10 online issue of Current Biology, a Johns Hopkins team led by neuroscientists Ed Connor and Kechen Zhang describes what appears to be the next step in understanding how the brain compresses visual information down to the essentials.

Most of us are familiar with the idea of image compression in computers. File extensions like “.jpg” or “.png” signify that millions of pixel values have been compressed into a more efficient format, reducing file size by a factor of 10 or more with little or no apparent change in image quality. The full set of original pixel values would occupy too much space in computer memory and take too long to transmit across networks.

The brain is faced with a similar problem. The images captured by light-sensitive cells in the retina are on the order of a megapixel. The brain does not have the transmission or memory capacity to deal with a lifetime of megapixel images. Instead, the brain must select out only the most vital information for understanding the visual world.

The researchers found that cells in area “V4,” a midlevel stage in the primate brain’s object vision pathway, are highly selective for image regions containing acute curvature. Experiments by doctoral student Eric Carlson showed that V4 cells are very responsive to sharply curved or angled edges, and much less responsive to flat edges or shallow curves.

To understand how selectivity for acute curvature might help with compression of visual information, co-author Russell Rasquinha (now at University of Toronto) created a computer model of hundreds of V4-like cells, training them on thousands of natural object images. After training, each image evoked responses from a large proportion of the virtual V4 cells — the opposite of a compressed format. And, somewhat surprisingly, these virtual V4 cells responded mostly to flat edges and shallow curvatures, just the opposite of what was observed for real V4 cells.

The results were quite different when the model was trained to limit the number of virtual V4 cells responding to each image. As this limit on responsive cells was tightened, the selectivity of the cells shifted from shallow to acute curvature. The tightest limit produced an eight-fold decrease in the number of cells responding to each image, comparable to the file size reduction achieved by compressing photographs into the .jpeg format. At this level, the computer model produced the same strong bias toward high curvature observed in the real V4 cells.

Why would focusing on acute curvature regions produce such savings? Because, as the group’s analyses showed, high-curvature regions are relatively rare in natural objects, compared to flat and shallow curvature. Responding to rare features rather than common features is automatically economical.
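The economy argument above can be checked with back-of-the-envelope numbers: if each cell fires only for a feature that appears rarely, far fewer cells are active per image. The cell count and probabilities below are invented, chosen only to mirror the eight-fold reduction reported in the study.

```python
# Why rare-feature coding is economical: expected active cells per image
# under a common-feature code vs. a rare-feature code. Numbers are invented.

n_cells = 1000
p_common = 0.40  # chance a flat/shallow-curve feature drives a given cell
p_rare = 0.05    # chance an acute-curvature feature drives a given cell

active_common = n_cells * p_common  # expected active cells, common-feature code
active_rare = n_cells * p_rare      # expected active cells, rare-feature code

print(active_common / active_rare)  # 8.0 — an eight-fold reduction
```

Fewer active cells per image means a sparser, cheaper code, the neural analogue of a smaller file.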

Despite the fact that they are relatively rare, high-curvature regions are very useful for distinguishing and recognizing objects, said Connor, a professor in the Solomon H. Snyder Department of Neuroscience in the School of Medicine, and director of the Zanvyl Krieger Mind/Brain Institute.

“Psychological experiments have shown that subjects can still recognize line drawings of objects when flat edges are erased. But erasing angles and other regions of high curvature makes recognition difficult,” he explained.

Brain mechanisms such as the V4 coding scheme described by Connor and colleagues help explain why we are all visual geniuses.

“Computers can beat us at math and chess,” said Connor, “but they can’t match our ability to distinguish, recognize, understand, remember, and manipulate the objects that make up our world.” This core human ability depends in part on condensing visual information to a tractable level. For now, at least, the .brain format seems to be the best compression algorithm around.


A study appearing Feb. 10 in Science Express calculates the world’s total technological capacity to store, communicate and compute information, part of a Special Online Collection: Dealing with Data.

The study by the USC Annenberg School for Communication & Journalism estimates that in 2007, humankind was able to store 2.9 × 10^20 optimally compressed bytes, communicate almost 2 × 10^21 bytes, and carry out 6.4 × 10^18 instructions per second on general-purpose computers.

  • General-purpose computing capacity grew at an annual rate of 58%.
  • The world’s capacity for bidirectional telecommunication grew at 28% per year, closely followed by the increase in globally stored information (23%).
  • Humankind’s capacity for unidirectional information diffusion through broadcasting channels has experienced comparatively modest annual growth (6%).
  • Telecommunication has been dominated by digital technologies since 1990 (99.9% in digital format in 2007), and the majority of our technological memory has been in digital format since the early 2000s (94% digital in 2007).
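The growth rates above translate directly into doubling times, a quick way to sense-check them (the computation is mine, not part of the study's reporting):

```python
# Doubling time implied by a constant annual growth rate:
# capacity doubles when (1 + r)^t = 2, so t = ln 2 / ln(1 + r).
import math

def doubling_time_years(annual_rate):
    return math.log(2) / math.log(1 + annual_rate)

print(round(doubling_time_years(0.58), 2))  # computing: ~1.52 years (~18 months)
print(round(doubling_time_years(0.28), 2))  # telecom:   ~2.81 years
print(round(doubling_time_years(0.23), 2))  # storage:   ~3.35 years
```

So general-purpose computing capacity was doubling roughly every 18 months, while broadcast capacity, at 6% growth, needed about 12 years to double.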


A digital mouse brain atlasing framework for sharing data has just been published in the open-access Public Library of Science (PLoS) Computational Biology journal.

Modern brain research generates immense quantities of data across different levels of detail, from gene activity to large-scale structure, using a wide array of methods. Each method has its own type of data and is stored in different databases. Integrating findings across levels of detail and from different databases, for example to find a link between gene expression and disease, is therefore challenging and time consuming. In addition, combining data from multiple types of brain studies provides a basis for new insights and is crucial for the progress of neuroscience research. Far too often, scientific progress is hindered by technical barriers to integrating data from different experiments and laboratories.

A major step in addressing these problems, a standard toolset that allows different types of neuroscience data to be combined and compared, is now available for one of the most important subjects in experimental neuroscience: the mouse, Mus musculus. A paper, “Digital Atlasing and Standardization in the Mouse Brain,” describing the vision and key steps that led to the creation of a digital mouse brain atlasing framework for sharing data has just been published in the Public Library of Science (PLoS) Computational Biology journal.

In this landmark publication, the INCF Digital Atlasing Task Force announces a digital atlasing framework which consists of Waxholm Space (WHS; named in honor of the group’s first meeting location) and a supporting web-based Digital Atlasing Infrastructure (DAI). Together they enable the integration of data from genetic, anatomical and functional imaging studies.

“By enabling researchers to link genetic studies with large-scale brain structure and behavior, we will catalyze both basic and medical neuroscience research — precisely the reason INCF was founded in the first place.” — Dr. Sean Hill, Executive Director, INCF.

Three major online mouse brain resources – the Allen Mouse Brain Atlas, the Edinburgh Mouse Atlas Project, and an effort from UCSD (primarily the Cell Centered Database) – are now integrated with the INCF Digital Atlasing Infrastructure and therefore working together. This interoperability will facilitate future research as well as increase the value of previously acquired data.

WHS and DAI were developed with coordination, organization and funding from the International Neuroinformatics Coordinating Facility (INCF). They are a collaborative project, spanning more than two years, of the now retired INCF Standards in Digital Atlasing Task Force. Since then, new Task Forces have been formed to continue and expand on this work. A more detailed publication of this group’s recommendations can be found in their report, published in September 2009.


In-Stat forecasts that the impact of the new 802.11ac Wi-Fi technology standard (developed to provide Gigabit speeds) will push shipments of 802.11ac-enabled devices from 0 in 2011 to nearly 1 billion by 2015.

“The goal of 802.11ac is to provide data speeds much faster than 802.11n, with speeds of around 1Gbps,” says Frank Dickson, Vice President of Research.

Some of the research findings include:

  • Mobile devices with Wi-Fi will still dominate shipments.  In 2015, shipments of mobile phones with embedded Wi-Fi are projected to approach 800 million.
  • By 2015, In-Stat projects that 100% of mobile hotspot shipments will be 802.11ac-enabled.
  • E-reader Wi-Fi attach rates will increase from 3% in 2009 to 90% by 2015.
  • In 2012, Wi-Fi automotive shipments will reach nearly 20 million.


Engineers and scientists collaborating at Harvard University and the MITRE Corp. have developed and demonstrated the world’s first programmable nanoprocessor.

The groundbreaking prototype computer system, described in a paper appearing Feb. 9 in the journal Nature, represents a significant step forward in the complexity of computer circuits that can be assembled from synthesized nanoscale components.

It also represents an advance because these ultra-tiny nanocircuits can be programmed electronically to perform a number of basic arithmetic and logical functions.

“This work represents a quantum jump forward in the complexity and function of circuits built from the bottom up, and thus demonstrates that this bottom-up paradigm, which is distinct from the way commercial circuits are built today, can yield nanoprocessors and other integrated systems of the future,” says principal investigator Charles M. Lieber, who holds a joint appointment at Harvard’s Department of Chemistry and Chemical Biology and School of Engineering and Applied Sciences.

The work was enabled by advances in the design and synthesis of nanowire building blocks. These nanowire components now demonstrate the reproducibility needed to build functional electronic circuits, and also do so at a size and material complexity difficult to achieve by traditional top-down approaches.

Moreover, the tiled architecture is fully scalable, allowing the assembly of much larger and ever more functional nanoprocessors.

“For the past 10 to 15 years, researchers working with nanowires, carbon nanotubes, and other nanostructures have struggled to build all but the most basic circuits, in large part due to variations in properties of individual nanostructures,” says Lieber, the Mark Hyman Jr. Professor of Chemistry. “We have shown that this limitation can now be overcome and are excited about prospects of exploiting the bottom-up paradigm of biology in building future electronics.”

An additional feature of the advance is that the circuits in the nanoprocessor operate using very little power, even allowing for their minuscule size, because their component nanowires contain transistor switches that are “nonvolatile.” This means that unlike transistors in conventional microcomputer circuits, once the nanowire transistors are programmed, they do not require any additional expenditure of electrical power for maintaining memory.

“Because of their very small size and very low power requirements, these new nanoprocessor circuits are building blocks that can control and enable an entirely new class of much smaller, lighter-weight electronic sensors and consumer electronics,” says co-author Shamik Das, the lead engineer in MITRE’s Nanosystems Group.

“This new nanoprocessor represents a major milestone toward realizing the vision of a nanocomputer that was first articulated more than 50 years ago by physicist Richard Feynman,” says James Ellenbogen, a chief scientist at MITRE.


The National Human Genome Research Institute (NHGRI) has developed a new vision, published today (Feb. 10) in the journal Nature, for exploring the human genome.

According to Eric Green, NHGRI’s director:

  • Over the next 10 years we will start to see spectacular advances in our understanding of how the genome works, how disease works, and how genomic changes are associated with disease. But truly changing medicine will take more than 10 years.
  • The biggest challenge is in data analysis. We can generate large amounts of data very inexpensively, but that overwhelms our capacity to understand it or make it available to physicians.
  • We also need to start thinking about how to train people, both health-care professionals and scientists, to be facile in bioinformatics.


New research conducted at The Scripps Research Institute shows that the connectome (the road atlas of the brain) undergoes constant revisions as the brain of a young animal develops, with new routes forming and others dropping away in a matter of hours.

Up until now, researchers had focused their work primarily on determining how new connections form and on finding ways to enhance such formation. But the finding by study leader Hollis Cline that so many immature connections are removed during development puts greater emphasis on the process of elimination, she says.

“It is possible that some genetic diseases are caused by the inefficient elimination of synapses,” says Cline. For example, individuals with a disease known as fragile X, a leading cause of mental retardation, are thought to have too many synapses, suggesting elimination did not occur properly.


A team of Spanish and Australian researchers have taken a first step towards a scientific method to measure the intelligence of a human being, an animal, a machine or an extra-terrestrial.

The authors used interactive exercises in settings whose difficulty level is estimated by calculating the so-called “Kolmogorov complexity” — a measure of the computational resources needed to describe an object or a piece of information. This makes their approach different from traditional psychometric tests and artificial intelligence tests (such as the Turing test).
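Kolmogorov complexity itself is uncomputable, so in practice it is commonly upper-bounded by the length of a compressed description. The sketch below uses that standard stand-in (it is an illustration of the measure, not the authors' actual test battery):

```python
import random
import zlib

def complexity_estimate(data: bytes) -> int:
    """Upper-bound the Kolmogorov complexity of `data` by its
    zlib-compressed length. The true quantity is uncomputable, so
    compressed size is a standard practical approximation."""
    return len(zlib.compress(data, 9))

# A highly regular string has a very short description...
regular = b"ab" * 500
# ...while pseudo-random bytes of the same length do not compress.
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(1000))

assert complexity_estimate(regular) < complexity_estimate(noisy)
```

A test environment built this way can grade a setting as "harder" when more computational description is needed to capture it.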

The most direct application of this study is in the field of artificial intelligence. Until now there has not been any way of checking whether current systems are more intelligent than the ones in use 20 years ago.

Even more important, there were no theories or tools to evaluate and compare future intelligent systems that might demonstrate intelligence greater than human intelligence.


Researchers from North Carolina State University have developed a single “unified” device that can perform both volatile and nonvolatile memory operation, with applications that could improve computer start times and energy efficiency for Internet servers.

“We’ve invented a new device that may revolutionize computer memory,” says Dr. Paul Franzon, a professor of electrical and computer engineering at NC State and co-author of a paper describing the research. “Our device is called a double floating-gate field effect transistor (FET).

“Existing nonvolatile memory* used in data storage devices utilizes a single floating gate, which stores charge to signify a 1 or 0 (one bit). By using two floating gates, the device can store a bit in a nonvolatile mode, and/or it can store a bit in a fast, volatile mode, like the normal main memory on your computer.”

The double floating-gate FET could have a significant impact on a number of computer problems. For example, it would allow computers to start immediately, because the computer wouldn’t have to retrieve start-up data from its hard drive — the data could be stored in its main memory.

The new device would also allow “power proportional computing.” For example, Web server farms, such as those used by Google, consume an enormous amount of power — even when there are low levels of user activity — in part because the server farms can’t turn off the power without affecting their main memory.

“The double floating-gate FET would help solve this problem,” Franzon says, “because data could be stored quickly in nonvolatile memory — and retrieved just as quickly. This would allow portions of the server memory to be turned off during periods of low use without affecting performance.”

Franzon also notes that the research team has investigated questions about this technology’s reliability, and that they think the device “can have a very long lifetime, when it comes to storing data in the volatile mode.”

The paper, “Computing with Novel Floating-Gate Devices,” will be published Feb. 10 in IEEE’s Computer. The paper was authored by Franzon; former NC State Ph.D. student Daniel Schinke; former NC State master’s student Mihir Shiveshwarkar; and Dr. Neil Di Spigna, a research assistant professor at NC State. The research was funded by the National Science Foundation.

* Traditionally, there are two types of computer memory devices. Slow memory devices are used in persistent data storage technologies such as flash drives. They allow us to save information for extended periods of time, and are therefore called nonvolatile devices. Fast memory devices allow our computers to operate quickly, but aren’t able to save data when the computers are turned off. The necessity for a constant source of power makes them volatile devices.


Your avatar may be just a virtual identity, but it can also affect how you are in the real world.

“In this world of new media, people spend a lot of time interacting with digital versions of one another.” — Jeremy Bailenson

If you spend a lot of time online, you may even have an electronic alter ego–an avatar. An avatar is a movable image that people design to represent themselves in virtual reality environments or in cyberspace.

“For some reason, I always pick really short people,” says Stanford undergraduate student and avid video gamer Oliver Castaneda.

“I have multiple variations,” says Michelle Del Rosario, another gamer and undergraduate student at the Virtual Human Interaction Lab (VHIL) at Stanford University. “Sometimes I choose to look like a really fun and bubbly character. Sometimes I want to look very serious.”

Sounds like avatars are for fun and games but could avatars actually change us? Jeremy Bailenson thinks so. With support from the National Science Foundation (NSF), he created the VHIL to study, among other things, the power avatars exert on their real world masters.

“As a lab, we’ve gone a bit out on a limb and argued that the reason you have an avatar is because an avatar makes you more human than human. It gives you the ability to do things you could never do in the physical world. You can be 10 years younger. You can swap your gender. You can be 30 pounds heavier or lighter. Any behavior or appearance you can imagine, you can transform your avatar to embody,” explains Bailenson.

Sometimes, avatars are designed to be ideal versions of their creators, and there’s now evidence that the virtual reality persona begins to influence the real life persona.

“Remember, in the virtual world–height, beauty–these things are free. We’ve demonstrated that if I increase the height of your avatar by 10 centimeters, you’ll win a negotiation compared to if I decrease the height of your avatar by 10 centimeters.”

Bailenson gives another example. “I use algorithms to age a 20-year-old undergraduate’s avatar and then I give that undergraduate the opportunity to save money or to spend it frivolously. The undergraduate will put more money in savings as opposed to go out and spend it on partying.”

Your avatar also may affect your fitness. In another test, Del Rosario puts on a head-mounted display that reveals an avatar that looks just like her. As she runs in place, her avatar runs, too, and visibly loses weight. When Del Rosario stands still, her avatar stops, and gets fatter. As you might suspect, it is important that the avatar resemble its creator.

“So, the power comes from seeing yourself in the third person gaining and losing weight in accordance with your own physical behavior,” says Bailenson. “Twenty-four hours later, people exercised more after being exposed to watching themselves run than watching someone else run.”

And, as it turns out, Bailenson and colleagues say we also tend to prefer others who resemble us. The researchers reached that conclusion in 2004 when they subtly morphed students’ faces with those of the presidential candidates. The students favored the hybrid candidate that included their own features.

“Even though nobody consciously detected that their own face was morphed inside the image, people whose face was morphed with Bush were more likely to vote for Bush in terms of their self-report on the survey. People whose face was morphed with Kerry indicated they’d be more likely to vote for Kerry. It’s very powerful stuff,” Bailenson says.

He believes avatars will soon play an even bigger role in our lives online. How we shape our own avatars and how we interact with others could have profound influences on our behavior.

“People like things that are similar to them whether it’s verbally, non-verbally or an appearance. We like people that look like us,” Bailenson explains. “We wanted to ask the big question in a world where I can make myself look more like you–how does that affect my ability to influence you?”

“Yeah,” says Castaneda. “I think we’re just beginning to explore all the potential there for, you know, re-imagining yourself in different worlds.”

And the line between reality and virtual reality gets blurrier every day.


A new type of micro-endoscope developed by Stanford University researchers lets scientists watch nerve cells and blood vessels deep inside the brain of a living animal over days, weeks, or even months.

Dubbed the optical needle, it is 500 to 1,000 microns in diameter.


Until now, scientists have thought that the process of erasing information requires energy (heat dissipation). But a new study by physicists Joan Vaccaro from Griffith University in Queensland, Australia and Stephen Barnett from the University of Strathclyde in Glasgow shows that, theoretically, information can be erased without using any energy at all.

Instead, the cost of erasure can be paid in terms of another conserved quantity, such as spin angular momentum.

Basically, instead of heat being exchanged between a qubit and thermal reservoir, discrete quanta of angular momentum are exchanged between a qubit and spin reservoir. The scientists described how repeated logic operations between the qubit’s spin and a secondary spin in the zero state eventually result in both spins reaching the logical zero state. Most importantly, the scientists showed that the cost of erasing the qubit’s memory is given in terms of the quantity defining the logic states, which in this case is spin angular momentum and not energy.
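For reference, the conventional result the study sidesteps is Landauer's principle: erasing one bit into a thermal reservoir at temperature $T$ dissipates at least

```latex
W_{\mathrm{erase}} \;\ge\; k_B T \ln 2
```

Vaccaro and Barnett's argument replaces the thermal reservoir with a spin reservoir, so the corresponding lower bound on the cost of erasure is paid in spin angular momentum rather than in energy.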


Researchers in Taiwan have developed a vending machine that recommends purchases based on shoppers’ faces and attempts to detect any smartphones, e-readers, or tablets the buyer might be carrying, to determine whether the shopper is equipped to download books, music, or films.


An automatic driving system that has just been road-tested for the first time in Sweden would link cars together into road trains or “platoons” to form semi-autonomous convoys under the control of a professional lead driver. The hope is that platooning will improve fuel consumption and cut congestion.

Each car is fitted with a navigation and communication system which measures the car’s speed and direction, constantly adjusting them to keep the car within a set distance of the vehicle in front. All commands to steer or change speed come from the driver of the lead vehicle and are carried out automatically.
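The gap-keeping behavior described above can be sketched as a simple feedback loop: each follower accelerates or brakes in proportion to its spacing error and to the speed difference with the vehicle ahead. The gains, target gap, and time step below are illustrative assumptions, not parameters from the Swedish trial:

```python
def follow_step(gap, own_speed, lead_speed, target_gap=10.0,
                k_gap=0.5, k_speed=0.8, dt=0.1):
    """One control step for a platoon follower: command an acceleration
    that closes the spacing error and matches the lead vehicle's speed.
    Returns the updated (gap, own_speed) after one time step `dt`."""
    accel = k_gap * (gap - target_gap) + k_speed * (lead_speed - own_speed)
    own_speed += accel * dt
    gap += (lead_speed - own_speed) * dt
    return gap, own_speed

# Follower starts 20 m back at 20 m/s behind a 25 m/s lead vehicle;
# over a minute of simulated time it settles near the 10 m target gap.
gap, speed = 20.0, 20.0
for _ in range(600):          # 600 steps of 0.1 s = 60 s
    gap, speed = follow_step(gap, speed, 25.0)
```

In the real system the command comes from the lead vehicle over a radio link rather than being computed locally from a measured gap, but the closed-loop structure is the same.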


Scientists at the U.S. Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) have coaxed polymers to braid themselves into wispy nanoscale ropes that approach the structural complexity of biological materials.

Their work is the latest development in the push to develop self-assembling nanoscale materials that mimic the intricacy and functionality of nature’s handiwork, but which are rugged enough to withstand harsh conditions such as heat and dryness.

Although still early in the development stage, their research could lead to new applications that combine the best of both worlds. Perhaps they’ll be used as scaffolds to guide the construction of nanoscale wires and other structures. Or perhaps they’ll be used to develop drug-delivery vehicles that target disease at the molecular scale, or to develop molecular sensors and sieve-like devices that separate molecules from one another.

Specifically, the scientists created the conditions for synthetic polymers called polypeptoids to assemble themselves into ever more complicated structures: first into sheets, then into stacks of sheets, which in turn roll up into double helices that resemble a rope measuring only 600 nanometers in diameter (a nanometer is a billionth of a meter).

“This hierarchical self-assembly is the hallmark of biological materials such as collagen, but designing synthetic structures that do this has been a major challenge,” says Ron Zuckermann, who is the Facility Director of the Biological Nanostructures Facility in Berkeley Lab’s Molecular Foundry.

In addition, unlike normal polymers, the scientists can control the atom-by-atom makeup of the ropy structures. They can also engineer helices of specific lengths and sequences. This “tunability” opens the door for the development of synthetic structures that mimic biological materials’ ability to carry out incredible feats of precision, such as homing in on specific molecules.

“Nature uses exact length and sequence to develop highly functional structures. An antibody can recognize one form of a protein over another, and we’re trying to mimic this,” adds Zuckermann.

Zuckermann and colleagues conducted the research at The Molecular Foundry, one of the five DOE Nanoscale Science Research Centers, premier national user facilities for interdisciplinary research at the nanoscale. Joining him were fellow Berkeley Lab scientists Hannah Murnen, Adrianne Rosales, Jonathan Jaworski, and Rachel Segalman. Their research was published in a recent issue of the Journal of the American Chemical Society.

The scientists worked with chains of bioinspired polymers called peptoids. Peptoids are structures that mimic peptides, which nature uses to form proteins, the workhorses of biology. Instead of using peptides to build proteins, however, the scientists are striving to use peptoids to build synthetic structures that behave like proteins.

The team started with a block copolymer, which is a polymer composed of two or more different monomers.

“Simple block copolymers self assemble into nanoscale structures, but we wanted to see how the detailed sequence and functionality of bioinspired units could be used to make more complicated structures,” says Rachel Segalman, a faculty scientist at Berkeley Lab and professor of Chemical and Biomolecular Engineering at University of California, Berkeley.

With this in mind, the peptoid pieces were robotically synthesized, processed, and then added to a solution that fosters self assembly.

The result was a variety of self-made shapes and structures, with the braided helices being the most intriguing. The hierarchical structure of the helix, and its ability to be manipulated atom-by-atom, means that it could be used as a template for mineralizing complex structures on a nanometer scale.

“The idea is to assemble structurally complex structures at the nanometer scale with minimal input,” says Hannah Murnen. She adds that the scientists’ next hope is to capitalize on the fact that they have minute control over the structure’s sequence, and to explore how very small chemical changes alter the helical structure.

Says Zuckermann, “These braided helices are one of the first forays into making atomically defined block copolymers. The idea is to take something we normally think of as plastic, and enable it to adopt structures that are more complex and capable of higher function, such as molecular recognition, which is what proteins do really well.”


UCLA neuroscientists have now collaborated with physicists to develop a non-invasive, ultra–high-speed microscope that can record in real time the firing of thousands of individual neurons in the brain as they communicate, or miscommunicate, with each other.

“In our view, this is the world’s fastest two-photon excitation microscope for three-dimensional imaging in vivo,” said UCLA physics professor Katsushi Arisaka, who designed the new optical imaging system with UCLA assistant professor of neurology and neurobiology Dr. Carlos Portera-Cailliau and colleagues.

Their research appears in the Jan. 9 edition of the journal Nature Methods.

Because neuropsychiatric diseases like autism and mental retardation often display no physical brain damage, it’s thought they are caused by conductivity problems — neurons not firing properly. Normal cells have patterns of electrical activity, said Portera-Cailliau, but abnormal cell activity as a whole doesn’t generate relevant information the brain can use.

“One of the greatest challenges for neuroscience in the 21st century is to understand how the billions of neurons that form the brain communicate with one another to produce complex behaviors,” he said. “The ultimate benefit from this type of research will come from deciphering how dysfunctional patterns of activity among neurons lead to devastating symptoms in a variety of neuropsychiatric disorders.”

For the last few years, Portera-Cailliau has been using calcium imaging, a technique that uses fluorescent dyes that are taken up by neurons. When the cells fire, they “blink like lights in a Christmas tree,” he said. “Our role now is to decipher the code that neurons use, which is buried in those blinking light patterns.”

But that technique had its limitations, according to Portera-Cailliau.

“The signal of the calcium-based fluorescent dye we used faded as we imaged deeper into the cortex. We couldn’t image all the cells,” he said.

Another problem was speed. Portera-Cailliau and his colleagues were concerned they were missing information because they couldn’t image a large enough portion of the brain fast enough to measure the group-firing of individual neurons. That was the driving impulse behind the collaboration with Arisaka and one of his graduate students, Adrian Cheng, to find a better way to record neuronal activity faster.

The imaging technology they developed is called multifocal two-photon microscopy with spatio-temporal excitation-emission multiplexing — STEM for short. The researchers modified two-photon laser-scanning microscopes to image fluorescent calcium dyes inside the neurons, and came up with a way to split the main laser beam into four smaller beamlets. This allowed them to record four times as many brain cells as the earlier version, or four times faster. In addition, they used a different beam to record neurons at different depths inside the brain, giving a 3-D effect, which had never been done previously.

“Most video cameras are designed to capture an image at 30 pictures per second. What we did was speed that up by 10 times to roughly 250 pictures per second,” Arisaka said. “And we are working on making it even faster.”

The result, he said, “is a high-resolution three-dimensional video of neuronal circuit activity in a living animal.”

The use of calcium imaging in research is already providing dividends. Portera-Cailliau studies Fragile X syndrome, a form of autism. By comparing the cortex of a normal mouse with a Fragile X mutant mouse, his group has discerned the misfiring that occurs in the Fragile X brain.


Google updated its Translate app for Android devices with Conversation Mode, a user interface for mobile devices designed to allow real-time conversation between two people speaking different languages. The current version supports only English-Spanish translation.


Researchers at the University of Illinois and Northwestern University have demonstrated bio-inspired structures that self-assemble from simple building blocks: spheres.

The helical “supermolecules” are made of tiny colloid balls instead of atoms or molecules. Similar methods could be used to make new materials with the functionality of complex colloidal molecules. The team will publish its findings in the Jan. 14 issue of the journal Science.

“We can now make a whole new class of smart materials, which opens the door to new functionality that we couldn’t imagine before,” said Steve Granick, Founder Professor of Engineering at the University of Illinois and a professor of materials science and engineering, chemistry, and physics.

Granick’s team developed tiny latex spheres, dubbed “Janus spheres,” which attract each other in water on one side, but repel each other on the other side. The dual nature is what gives the spheres their ability to form unusual structures, in a similar way to atoms and molecules.

In pure water, the particles disperse completely because their charged sides repel one another. However, when salt is added to the solution, the salt ions soften the repulsion so the spheres can approach sufficiently closely for their hydrophobic ends to attract. The attraction between those ends draws the spheres together into clusters.

At low salt concentrations, small clusters of only a few particles form. At higher levels, larger clusters form, eventually self-assembling into chains with an intricate helical structure.
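The salt-dependent clustering can be caricatured with a toy aggregation model: treat "salt" as a 0-to-1 knob for how strongly the charge repulsion is screened, so each arriving particle either joins an existing cluster or stays dispersed. This is a hedged illustration of the trend only, not a simulation of the actual Janus-sphere physics:

```python
import random

def mean_cluster_size(n_particles=500, salt=0.0, seed=1):
    """Toy aggregation model: each particle joins a randomly chosen
    existing cluster with probability `salt` (standing in for how much
    the salt screens the electrostatic repulsion), and otherwise stays
    dispersed as a new single-particle cluster."""
    random.seed(seed)
    clusters = []
    for _ in range(n_particles):
        if clusters and random.random() < salt:
            random.choice(clusters).append(1)   # attach to a cluster
        else:
            clusters.append([1])                # remain dispersed
    return n_particles / len(clusters)

# More salt -> weaker repulsion -> larger clusters, matching the trend
# reported for the Janus spheres (low salt: small clusters; high salt:
# large clusters and chains).
assert mean_cluster_size(salt=0.1) < mean_cluster_size(salt=0.8)
```

The real system adds the geometric part this toy omits: the hydrophobic patches bias *where* particles attach, which is what steers the clusters into helices rather than blobs.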

“Just like atoms growing into molecules, these particles can grow into supracolloids,” Granick said. “Such pathways would be very conventional if we were talking about atoms and molecules reacting with each other chemically, but people haven’t realized that particles can behave in this way also.”

The team designed spheres with just the right amount of attraction between their hydrophobic halves so that they would stick to one another but still be dynamic enough to allow for motion, rearrangement, and cluster growth.

“The amount of stickiness really does matter a lot. You can end up with something that’s disordered, just small clusters, or if the spheres are too sticky, you end up with a globular mess instead of these beautiful structures,” said graduate student Jonathan Whitmer, a co-author of the paper.

One of the advantages of the team’s supermolecules is that they are large enough to observe in real time using a microscope. The researchers were able to watch the Janus spheres come together and the clusters grow – whether one sphere at a time or by merging with other small clusters – and rearrange into different structural configurations the team calls isomers.

“We design these smart materials to fall into useful shapes that nature wouldn’t choose,” Granick said.

Surprisingly, theoretical calculations and computer simulations by Erik Luijten, Northwestern University professor of materials science and engineering and of engineering sciences and applied mathematics, and Whitmer, a student in his group, showed that the most common helical structures are not the most energetically favorable. Rather, the spheres come together in a way that is the most kinetically favorable – that is, the first good fit that they encounter.

Next, the researchers hope to continue to explore the colloid properties with a view toward engineering more unnatural structures. Janus particles of differing sizes or shapes could open the door to building other supermolecules and to greater control over their formation.

“These particular particles have preferred structures, but now that we realize the general mechanism, we can apply it to other systems – smaller particles, different interactions – and try to engineer clusters that switch in shape,” Granick said.


The fruit fly has evolved a method for arranging the tiny, hair-like structures it uses to feel and hear the world that is so efficient that a team of scientists in Israel and at Carnegie Mellon University says it could be used to deploy wireless sensor networks and other distributed computing applications more effectively.

With a minimum of communication and without advance knowledge of how they are connected with each other, the cells in the fly’s developing nervous system manage to organize themselves so that a small number of cells serve as leaders that provide direct connections with every other nerve cell, said author Ziv Bar-Joseph, associate professor of machine learning at Carnegie Mellon University.

The result, the researchers report in the Jan. 14 edition of the journal Science, is the same sort of scheme used to manage the distributed computer networks that perform such everyday tasks as searching the Web or controlling an airplane in flight. But the method used by the fly’s nervous system to organize itself is much simpler and more robust than anything humans have concocted.

“It is such a simple and intuitive solution, I can’t believe we did not think of this 25 years ago,” said co-author Noga Alon, a mathematician and computer scientist at Tel Aviv University and the Institute for Advanced Study in Princeton, N.J.

Bar-Joseph, Alon and their co-authors — Yehuda Afek of Tel Aviv University and Naama Barkai, Eran Hornstein and Omer Barad of the Weizmann Institute of Science in Rehovot, Israel — used the insights gained from fruit flies to design a new distributed computing algorithm. They found it has qualities that make it particularly well suited for networks in which the number and position of the nodes is not completely certain. These include wireless sensor networks, such as environmental monitoring, where sensors are dispersed in a lake or waterway, or systems for controlling swarms of robots.

“Computational and mathematical models have long been used by scientists to analyze biological systems,” said Bar-Joseph, a member of the Lane Center for Computational Biology in Carnegie Mellon’s School of Computer Science. “Here we’ve reversed the strategy, studying a biological system to solve a long-standing computer science problem.”

Today’s large-scale computer systems and the nervous system of a fly both take a distributive approach to performing tasks. Though the thousands or even millions of processors in a computing system and the millions of cells in a fly’s nervous system must work together to complete a task, none of the elements need to have complete knowledge of what’s going on, and the systems must function despite failures by individual elements.

In the computing world, one step toward creating this distributive system is to find a small set of processors that can be used to rapidly communicate with the rest of the processors in the network — what graph theorists call a maximal independent set (MIS). Every processor in such a network is either a leader (a member of the MIS) or is connected to a leader, but the leaders are not interconnected.

A similar arrangement occurs in the fruit fly, which uses tiny bristles to sense the outside world. Each bristle develops from a nerve cell, called a sensory organ precursor (SOP), which connects to adjoining nerve cells, but does not connect with other SOPs.

For three decades, computer scientists have puzzled over how processors in a network can best elect an MIS. The common solutions use a probabilistic method — similar to rolling dice — in which some processors identify themselves as leaders, based in part on how many connections they have with other processors. Processors connected to these self-selected leaders take themselves out of the running and, in subsequent rounds, additional processors self-select themselves and the processors connected to them take themselves out of the running. At each round, the chances of any processor joining the MIS (becoming a leader) increases as a function of the number of its connections.

This selection process is rapid, Bar-Joseph said, but it entails lots of complicated messages being sent back and forth across the network, and it requires that all of the processors know in advance how they are connected in the network. That can be a problem for applications such as wireless sensor networks, where sensors might be distributed randomly and all might not be within communication range of each other.

During the larval and pupal stages of a fly’s development, the nervous system also uses a probabilistic method to select the cells that will become SOPs. In the fly, however, the cells have no information about how they are connected to each other. As various cells self-select themselves as SOPs, they send out chemical signals to neighboring cells that inhibit those cells from also becoming SOPs. This process continues for three hours, until all of the cells are either SOPs or are neighbors to an SOP, and the fly emerges from the pupal stage.

In the fly, Bar-Joseph noted, the probability that any cell will self-select increases not as a function of connections, as in the typical MIS algorithm for computer networks, but as a function of time. The method does not require advance knowledge of how the cells are arranged. The communication between cells is as simple as can be.

The researchers created a computer algorithm based on the fly’s approach and proved that it provides a fast solution to the MIS problem. “The run time was slightly greater than current approaches, but the biological approach is efficient and more robust because it doesn’t require so many assumptions,” Bar-Joseph said. “This makes the solution applicable to many more applications.”
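The fly-style selection described above can be sketched in a few lines of Python. This is a toy illustration of the idea — time-driven self-selection plus "inhibit" signals to neighbors — not the authors' published algorithm; in particular, the tie-break between simultaneous volunteers is centralized here for simplicity, whereas the fly resolves collisions through chemical feedback.

```python
import random

def fly_mis(adj, rounds=100, seed=0):
    """Toy, fly-inspired MIS selection on a graph given as {node: [neighbors]}.

    Each undecided node self-selects with a probability that grows with
    time (not with its degree), then 'inhibits' its neighbors -- the
    analogue of the SOP's chemical signal."""
    rng = random.Random(seed)
    state = {v: "undecided" for v in adj}          # undecided / leader / follower
    for r in range(1, rounds + 1):
        p = min(1.0, r / 10.0)                     # selection probability rises over time
        volunteers = [v for v in adj if state[v] == "undecided" and rng.random() < p]
        chosen = []
        for v in volunteers:                       # simplified tie-break for adjacent volunteers
            if all(u not in chosen for u in adj[v]):
                chosen.append(v)
        for v in chosen:
            state[v] = "leader"
            for u in adj[v]:                       # inhibit neighbors from becoming leaders
                if state[u] == "undecided":
                    state[u] = "follower"
        if all(s != "undecided" for s in state.values()):
            break
    return {v for v, s in state.items() if s == "leader"}
```

Running this on, say, a six-node ring yields a set in which every node is either a leader or adjacent to one, and no two leaders are adjacent — exactly the MIS property the article describes.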


InteraXon will be showing two new applications for its mind-control technology at CES. The technology uses headphones equipped with a pair of sensors that sit against the user’s left ear and forehead, forming a circuit that detects electrical signals from the brain.


University of Missouri and Columbia University researchers have found a way to create biological joints in animals, and they believe biological joint replacements for humans, using a patient’s own cells, aren’t far away.

James Cook, a researcher in the MU College of Veterinary Medicine and Department of Orthopaedic Surgery, was part of a research team that created new cartilage in animals using a biological “scaffold” in the animals’ joints. Cook assisted with the implant design and performed the surgeries to implant the biologic joint replacements. The study was led by Jeremy Mao of Columbia University.

The scaffold was implanted in rabbits with a surgical technique currently used for shoulder replacement in humans. The surgery removes the entire humeral head, or the ball part of the ball-and-socket shoulder joints. The scaffolds are infused with a growth factor, which encourages the host’s own cells, including stem cells, to become cartilage and bone cells. The advantage to this technique is that it avoids the need to harvest and implant cells, which requires multiple surgeries.

“The device was designed with both biological and mechanical factors in mind,” Cook said. “It is unique in design and composition and in how it stimulates the body’s own cells. This is the first time we have seen cartilage regeneration using this type of scaffold.”

The study found that the rabbits given the infused scaffolds resumed weight-bearing and functional use of their limbs faster and more consistently than those without. Four months later, cartilage had formed in the scaffolds, creating a new, functional cartilage surface for the humeral head. The team observed no complications or adverse events after surgery; the new tissue regeneration was associated with excellent limb use and shoulder health, indicating the procedure is both safe and effective. Cook, who also was involved in the study design and data analysis, said the next step toward FDA approval and clinical use is to study the technique in larger animals.

“If we continue to prove the safety and efficacy of this biologic joint replacement strategy, then we can get FDA approval for use of this technology for joint replacements in people,” Cook said. “We are still in the early phases of this process, but this study gives a big boost to its feasibility.”

“We are continuing our concerted efforts in this arena,” Cook said. “Our goal at Mizzou’s Comparative Orthopaedic Laboratory is to do away with metal and plastic joints, and instead, regenerate a fully functional biologic joint for everyone who needs one. We think this is the future of orthopaedics and we hope that future is starting here and now.”

KURZWEILAI SAYS: In a groundbreaking achievement that could help scientists “build” new biological systems, Princeton University scientists have constructed for the first time artificial proteins that enable the growth of living cells.

The team of researchers created genetic sequences never before seen in nature, and the scientists showed that they can produce substances that sustain life in cells almost as readily as proteins produced by nature’s own toolkit.

“What we have here are molecular machines that function quite well within a living organism even though they were designed from scratch and expressed from artificial genes,” said Michael Hecht, a professor of chemistry at Princeton, who led the research. “This tells us that the molecular parts kit for life need not be limited to parts — genes and proteins — that already exist in nature.”

The work, Hecht said, represents a significant advance in synthetic biology, an emerging area of research in which scientists work to design and fabricate biological components and systems that do not already exist in the natural world. One of the field’s goals is to develop an entirely artificial genome composed of unique patterns of chemicals.

“Our work suggests,” Hecht said, “that the construction of artificial genomes capable of sustaining cell life may be within reach.”

Nearly all previous work in synthetic biology has focused on reorganizing parts drawn from natural organisms. In contrast, Hecht said, the results described by the team show that biological functions can be provided by macromolecules that were not borrowed from nature, but designed in the laboratory.

Although scientists have shown previously that proteins can be designed to fold and, in some cases, catalyze reactions, the Princeton team’s work represents a new frontier in creating these synthetic proteins.

The research, which Hecht conducted with three former Princeton students and a former postdoctoral fellow, is described in a report published online Jan. 4 in the journal Public Library of Science ONE.

Hecht and the students in his lab study the relationship between biological processes on the molecular scale and processes at work on a larger magnitude. For example, he is studying how the errant folding of proteins in the brain can lead to Alzheimer’s disease, and is involved in a search for compounds to thwart that process. In work that relates to the new paper, Hecht and his students also are interested in learning what processes drive the routine folding of proteins on a basic level — as proteins need to fold in order to function — and why certain key sequences have evolved to be central to existence.

Proteins are the workhorses of organisms, produced from instructions encoded into cellular DNA. The identity of any given protein is dictated by its unique sequence of building blocks drawn from the 20 chemicals known as amino acids. If the different amino acids can be viewed as letters of an alphabet, each protein sequence constitutes its own unique “sentence.”

And, if a protein is 100 amino acids long (most proteins are even longer), there is an astronomically large number of possible protein sequences, Hecht said. At the heart of the team’s research was the question of why only about 100,000 different proteins are produced in the human body when there is potential for so many more. They wondered: are these particular proteins somehow special? Or might others work equally well, even though evolution has not yet had a chance to sample them?
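The arithmetic behind "astronomically large" is easy to check: a protein is a string over a 20-letter amino-acid alphabet, so a 100-residue protein has 20^100 possible sequences.

```python
import math

# 20 amino acids, chain length 100 -> 20^100 possible sequences,
# roughly 10^130, dwarfing the ~100,000 proteins the human body uses.
possible_sequences = 20 ** 100
magnitude = math.floor(math.log10(possible_sequences))
print(f"20^100 is roughly 10^{magnitude}")
```

Even sampling a million designed sequences, as the team did, touches only a vanishingly small corner of that space — which is what makes finding life-sustaining proteins in it remarkable.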

Hecht and his research group set out to create artificial proteins encoded by genetic sequences not seen in nature. They produced about 1 million amino acid sequences that were designed to fold into stable three-dimensional structures.

“What I believe is most intriguing about our work is that the information encoded in these artificial genes is completely novel — it does not come from, nor is it significantly related to, information encoded by natural genes, and yet the end result is a living, functional microbe,” said Michael Fisher, a co-author of the paper who earned his Ph.D. at Princeton in 2010 and is now a postdoctoral fellow at the University of California-Berkeley. “It is perhaps analogous to taking a sentence, coming up with brand new words, testing if any of our new words can take the place of any of the original words in the sentence, and finding that in some cases, the sentence retains virtually the same meaning while incorporating brand new words.”

Once the team had created this new library of artificial proteins, they inserted the genes encoding those proteins into various mutant strains of bacteria in which certain natural genes previously had been deleted. The deleted natural genes are required for survival under a given set of conditions, including a limited food supply. Under these harsh conditions, the mutant strains of bacteria died — unless they acquired a life-sustaining novel protein from Hecht’s collection. This was significant because formation of a bacterial colony under these selective conditions could occur only if a protein in the collection had the capacity to sustain the growth of living cells.

In a series of experiments exploring the role of differing proteins, the scientists showed that several different strains of bacteria that should have died were rescued by novel proteins designed in the laboratory. “These artificial proteins bear no relation to any known biological sequences, yet they sustained life,” Hecht said.

Added Kara McKinley, also a co-author and a 2010 Princeton graduate who is now a Ph.D. student at the Massachusetts Institute of Technology: “This is an exciting result, because it shows that unnatural proteins can sustain a natural system, and that such proteins can be found at relatively high frequency in a library designed only for structure.”

The research was funded by the National Science Foundation.


High-resolution, low-cost cameras are proliferating, found in products like smartphones and laptop computers. The cost of storing images is dropping, and new software algorithms for mining, matching and scrutinizing the flood of visual data are progressing swiftly.

People will increasingly be surrounded by machines that can not only see but also reason about what they are seeing, in their own limited way.

The privacy challenge arises from the prospect of the rapid spread of these less-expensive yet powerful computer-vision technologies.

At work or school, the technology opens the door to a computerized supervisor that is always watching. Are you paying attention, goofing off or daydreaming? In stores and shopping malls, smart surveillance could bring behavioral tracking into the physical world.

More subtle could be the effect of a person knowing that he is being watched — and how that awareness changes his thinking and actions. It could be beneficial: a person thinks twice and a crime goes uncommitted. But might it also lead to a society that is less spontaneous, less creative, less innovative?


Gordon Bell’s vision for the future is of software that will let you sort and sift through your digital memories to uncover patterns you would never have gleaned unaided.

You will need lifelogging hardware: a discreet camera permanently slung around your neck that can take photos at regular intervals, and a GPS device to record where you are at any time. Your phone calls, conversations and meetings will need to be digitally captured, all your emails stored, and every web page you look at downloaded. Then you will need to scan in any paper documents that head your way and refuse any books unless they are available on an e-reader.

Work, leisure and spending habits, the pattern of emotional response in various situations and around certain people, the numerous subtle factors affecting your mental well-being and physical health — just about anything you care to know about yourself — can be chronicled, condensed, cross-correlated and plotted out.


A new study by Northeastern University researchers indicates that the amygdala in the human brain appears to play an important role in social life among adult humans.

Their finding, published in the journal Nature Neuroscience, provides insight into how abnormalities in regions of the brain may affect social behavior in neurologic and psychiatric disorders.

The interdisciplinary study, led by Distinguished Professor of Psychology Lisa Feldman Barrett, advances Northeastern’s research mission to solve societal issues, with a focus on global challenges in health, security, and sustainability.

“We know that primates who live in larger social groups have a larger amygdala, even when controlling for overall brain size and body size,” said Barrett. “We considered a single primate species, humans, and found that the amygdala volume positively correlated with the size and complexity of social networks in adult humans.”

The researchers asked 58 participants to complete standard questionnaires that reported on the size and the intricacies of their social networks. They measured the number of regular contacts each participant maintained, as well as the number of social groups to which these contacts belonged.

Participants also had a magnetic resonance imaging brain scan to gather information about various brain structures, including the volume of the amygdala. The authors found that individuals with a larger amygdala reported larger and more complex social networks. This link was observed for both older and younger individuals, and for both men and women.

Barrett noted that the study findings are consistent with the “social brain hypothesis,” which suggests that the human amygdala might have evolved partially to deal with an increasingly complex social life.

Exploratory analysis of other structures deep within the brain indicated that the amygdala was the only region with compelling evidence of a link to social life in humans.
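The association the study reports — amygdala volume correlating with social-network size — is the familiar Pearson correlation. A minimal sketch, with numbers invented purely for illustration (they are not the study's data):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

volumes  = [2.9, 3.1, 3.4, 3.6, 4.0]   # hypothetical amygdala volumes (cm^3)
contacts = [10, 14, 13, 19, 22]        # hypothetical counts of regular contacts
r = pearson_r(volumes, contacts)       # positive r: larger amygdala, larger network
```

A positive r close to 1 would indicate the kind of volume/network-size relationship described, though correlation alone says nothing about which way the causality runs.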


Scientists at the University of Glasgow and the University of Massachusetts Lowell have created an ultra-fast 1,000-core computer processor.

They used a field programmable gate array (FPGA) chip, which can be configured into specific circuits by the user, enabling the researchers to divide up the transistors within the chip into small groups and ask each to perform a different task.

By creating more than 1,000 mini-circuits within the FPGA chip, the researchers effectively turned the chip into a 1,000-core processor – each core working on its own instructions.

The researchers then used the chip to process an algorithm that is central to the MPEG movie format — used in YouTube videos — at a speed of five gigabytes per second: around 20 times faster than current top-end desktop computers.

“FPGAs are not used within standard computers because they are fairly difficult to program, but their processing power is huge while their energy consumption is very small because they are so much quicker, so they are also a greener option,” said researcher Dr. Wim Vanderbauwhede.

While most computers sold today contain more than one processing core, allowing them to carry out different processes simultaneously, traditional multi-core processors must share access to one memory source, which slows the system down. The scientists in this research were able to make the processor faster by giving each core a certain amount of dedicated memory.
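The design choice — per-core private memory instead of one contended shared memory — can be illustrated with a toy Python sketch. (FPGA cores are of course described in hardware description languages; this only shows the partitioning structure, with names invented for the example.)

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    """Each 'core' works only on its own dedicated slice of data, so it
    needs no synchronization with the other cores."""
    return sum(x * x for x in chunk)          # stand-in for real per-core work

def partitioned_run(data, n_cores=4):
    # Give every core its own private chunk -- the analogue of the
    # dedicated per-core memory described above.
    size = -(-len(data) // n_cores)           # ceiling division
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=n_cores) as pool:
        return sum(pool.map(process_chunk, chunks))
```

Because no worker ever reads another worker's chunk, there is no memory contention to slow the system down — the property the Glasgow design exploits in hardware.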

Vanderbauwhede, who hopes to present his research at the International Symposium on Applied Reconfigurable Computing in March 2011, added: “This is very early proof-of-concept work where we’re trying to demonstrate a convenient way to program FPGAs so that their potential to provide very fast processing power could be used much more widely in future computing and electronics.

“While many existing technologies currently make use of FPGAs, including plasma and LCD televisions and computer network routers, their use in standard desktop computers is limited.

“However, we are already seeing some microchips which combine traditional CPUs with FPGA chips being announced by developers, including Intel and ARM. I believe these kinds of processors will only become more common and help to speed up computers even further over the next few years.”


Your digital camera may embed metadata into photographs with the camera’s serial number or your location. Your printer may be incorporating a secret code on every page it prints which could be used to identify the printer and potentially the person who used it. If Apple puts a particularly creepy patent it has recently applied for into use, you can look forward to a day when your iPhone may record your voice, take a picture of your location, record your heartbeat, and send that information back to the mothership.

This is traitorware: devices that act behind your back to betray your privacy, says EFF.


In the field of connectomics, the goal is to find how memories, personality traits and skills are stored in the brain.

A connectome would provide a far more detailed look at the brain’s inner workings than current techniques that measure blood flow in certain regions. The researchers contend that it would literally show how people are wired and illuminate differences in the brains of people with mental illness.

“The world is not yet ready for the million-petabyte data set the human brain would be,” Dr. Jeff Lichtman of Harvard said. “But it will be.”


The emergence of AI will transform the Internet industry and social networking over the next decade, says Yury Milner, Russia’s leading web tycoon, in an interview in Russian business daily Vedomosti.

“I think that in 10 years if you ask a question on a social network and you get an answer you will not know if a computer or a person has answered you,” he said. “When you receive a question, you will not know if it has been asked by a person or an artificial intelligence. And by answering you help the computer create an algorithm.”


Besides being installed in everything from automatic teller machines and airport check-in kiosks to pacemakers and ocean monitoring sensors, microchips also are going into a staggering array of once decidedly low-tech items — from grave-stone markers and running shoes to fish lures and writing pens.

These embedded devices are becoming “more programmable, they’re getting faster, and they’re getting communications functions built into them,” said Jordan Selburn, principal analyst at research firm iSuppli.

The market for these embedded chips is estimated at $150 billion or more. But that’s expected to grow as companies increasingly explore ways to make common household appliances and other products access the Internet, communicate with one another, and perform new functions for consumers.


According to a new report released by Forrester Research on Monday, the technology behind augmented reality apps has improved enough that, as information becomes “ultra-accurate and delivered in a perfectly seamless way,” these apps may well become an integral part of using a mobile phone, augmenting real life with broad strokes of information and commentary.

We can expect to see more augmented reality apps in consumer shopping, the report says. You could, for example, hold your phone over a blouse you want to buy, and see comments from other shoppers, get a product coupon or a price deal, or even find out what kind of fabric the blouse is made of.


French robotics company Gostai has unveiled a mobile robot called Jazz Connect, designed for “telepresence and telesurveillance.”

The waist-high robot, which a user can remote-control using a web-based interface, rolls on two wheels and has a head that can move in any direction, with a camera on its forehead. The price starts at 7900 euros.

The web-based interface lets the user drive the robot simply by clicking with the mouse anywhere on the video feed.

The Anybots QB and the Willow Garage Texai are similar products.

In terms of limitations, the robot is short — only 1 meter tall — and lacks an LCD screen that would let people interacting with the robot see the face of the remote operator.


Stephen Feeney at University College London and colleagues say they’ve found tentative evidence of four collisions with other universes in the form of circular patterns in the cosmic microwave background.

In their model of the universe, called “eternal inflation,”  the universe we see is merely a bubble in a much larger cosmos. This cosmos is filled with other bubbles, all of which are other universes where the laws of physics may be dramatically different from ours.


IBM and America’s Favorite Quiz show Jeopardy! today announced that an IBM computing system named “Watson” will compete on Jeopardy! against the show’s two most successful and celebrated contestants — Ken Jennings and Brad Rutter.

The first-ever man vs. machine Jeopardy! competition will air on February 14, 15 and 16, 2011, with two matches being played over three consecutive days.

Watson, named after IBM founder Thomas J. Watson, was built by a team of IBM scientists who set out to accomplish a grand challenge – build a computing system that rivals a human’s ability to answer questions posed in natural language with speed, accuracy and confidence. The Jeopardy! format provides the ultimate challenge because the game’s clues involve analyzing subtle meaning, irony, riddles, and other complexities in which humans excel and computers traditionally do not.

Watson is a breakthrough human achievement in the scientific field of question answering, also known as “QA.” The Watson software is powered by an IBM POWER7 server optimized to handle the massive number of tasks that Watson must perform at rapid speeds to analyze complex language and deliver correct responses to Jeopardy! clues. The system incorporates a number of proprietary technologies for the specialized demands of processing an enormous number of concurrent tasks and data while analyzing information in real time.

Competing against Watson will be two of the most celebrated players ever to appear on Jeopardy! Ken Jennings broke the Jeopardy! record for the most consecutive games played by winning 74 games in a row during the 2004–2005 season, resulting in winnings of more than $2.5 million. Brad Rutter won the highest cumulative amount ever by a single Jeopardy! player, earning $3,255,102. The total amount is a combination of Rutter’s original appearance in 2002, plus three tournament wins: the “Tournament of Champions” and the “Million Dollar Masters Tournament” in 2002 and the “Ultimate Tournament of Champions” in 2005.

The grand prize for this competition will be $1 million with second place earning $300,000 and third place $200,000. Rutter and Jennings will donate 50 percent of their winnings to charity and IBM will donate 100 percent of its winnings to charity.

“After four years, our scientific team believes that Watson is ready for this challenge based on its ability to rapidly comprehend what the Jeopardy! clue is asking, analyze the information it has access to, come up with precise answers, and develop an accurate confidence in its response,” said Dr. David Ferrucci, the scientist leading the IBM Research team that has created Watson. “Beyond our excitement for the match itself, our team is very motivated by the possibilities that Watson’s breakthrough computing capabilities hold for building a smarter planet and helping people in their business tasks and personal lives.”

“We’re thrilled that Jeopardy! is considered a benchmark of ultimate knowledge,” said Harry Friedman, Executive Producer of Jeopardy!. “Performing well on Jeopardy! requires a combination of skills, and it will be fascinating to see whether a computer can compete against arguably the two best Jeopardy! players ever.”

Jeopardy!, the winner of 28 Emmy awards since its syndicated debut in 1984, is in the Guinness Book of World Records for the most awards won by a TV Game Show. The series is the #1-rated quiz show in syndication with nearly 9 million daily viewers. Jeopardy! is produced by Sony Pictures Television, a Sony Pictures Entertainment Company. It is distributed domestically by CBS Television Distribution and internationally by CBS Television International, both units of CBS Corp.

Beyond Jeopardy!, the technology behind Watson can be adapted to solve problems and drive progress in various fields. The computer has the ability to sift through vast amounts of data and return precise answers, ranking its confidence in its answers. The technology could be applied in areas such as healthcare, to help accurately diagnose patients, to improve online self-service help desks, to provide tourists and citizens with specific information about cities, to deliver prompt customer support via phone, and much more.
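The "ranking its confidence" idea can be sketched in miniature. This is only an illustration of the concept — score each candidate answer by its share of the supporting evidence, and answer only when the top confidence clears a threshold — not IBM's actual pipeline, and the names here are invented:

```python
def best_answer(evidence_counts, threshold=0.5):
    """Pick the candidate answer with the largest share of evidence;
    decline to answer if its confidence is below the threshold."""
    total = sum(evidence_counts.values())
    ranked = sorted(((count / total, ans) for ans, count in evidence_counts.items()),
                    reverse=True)
    confidence, answer = ranked[0]
    return (answer, confidence) if confidence >= threshold else (None, confidence)
```

For example, `best_answer({"Chicago": 4, "Toronto": 1})` returns `("Chicago", 0.8)`, while three equally supported candidates would fall below the threshold and the system would stay silent — the "accurate confidence" behavior Ferrucci describes.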


Taiwanese researchers have managed to bar code some 16,000 of the 100,000 neurons in a fruit fly’s brain and to reconstruct the brain’s wiring map. Biologists see this atlas of the fly brain as a first step toward understanding the human brain.

The team describes the general architecture of the fly’s brain as composed of 41 local processing units, 58 tracts that link the units to other parts of the brain, and six hubs.

Each neuron is given a bar code with the coordinates of where its cell nucleus lies within the standard Drosophila brain, as well as information about which other parts of the brain the neuron connects to, and which kind of chemical transmitter it uses. With 16,000 images in hand, the team, led by Dr. Ann-Shyn Chiang, was able to analyze the general architecture of the female fruit fly’s brain.

The fly brain, with its 100,000 neurons, may prove a better starting point for understanding the human brain, which has an estimated 100 billion neurons, each with about 1,000 synapses.
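As a rough illustration, one entry in such an atlas could be modeled as a small record holding exactly the three pieces of information described: soma coordinates, downstream connections, and transmitter type. The field names and sample values here are invented for the sketch and are not the researchers' actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class NeuronBarcode:
    """One neuron's 'bar code' in a fly-brain atlas: where its cell body
    sits in the standard brain, which regions it connects to, and which
    chemical transmitter it uses."""
    x: float
    y: float
    z: float
    transmitter: str
    connects_to: list = field(default_factory=list)

# Hypothetical entry, for illustration only.
atlas = [
    NeuronBarcode(120.5, 88.2, 40.1, "GABA",
                  ["mushroom body", "central complex"]),
]
```

Collecting 16,000 such records is what lets the wiring map — local processing units, tracts, and hubs — be reconstructed by grouping neurons by their connection targets.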


Google is working on a service using “contextual discovery” for pushing information out to people before they’ve started to look for it, based on factors such as their web browsing history or current location.


IBM scientists Wednesday unveiled a new chip technology that integrates electrical and optical devices on the same piece of silicon, enabling computer chips to communicate using pulses of light (instead of electrical signals), resulting in smaller, faster and more power-efficient chips than is possible with conventional technologies.

The new technology, called CMOS Integrated Silicon Nanophotonics, is the result of a decade of development at IBM’s global Research laboratories.

The patented technology will change and improve the way computer chips communicate — by integrating optical devices and functions directly onto a silicon chip, enabling a more than 10X improvement in integration density over what is feasible with current manufacturing techniques.

IBM anticipates that Silicon Nanophotonics will dramatically increase the speed and performance of communication between chips, and further the company’s ambitious Exascale computing program, which is aimed at developing a supercomputer that can perform one million trillion calculations — or an Exaflop — in a single second. An Exascale supercomputer will be approximately one thousand times faster than the fastest machine today.

“The development of the Silicon Nanophotonics technology brings the vision of on-chip optical interconnections much closer to reality,” said Dr. T.C. Chen, vice president, Science and Technology, IBM Research. “With optical communications embedded into the processor chips, the prospect of building power-efficient computer systems with performance at the Exaflop level is one step closer to reality.”

In addition to combining electrical and optical devices on a single chip, the new IBM technology can be produced on the front-end of a standard CMOS manufacturing line and requires no new or special tooling. With this approach, silicon transistors can share the same silicon layer with silicon nanophotonics devices. To make this approach possible, IBM researchers have developed a suite of integrated ultra-compact active and passive silicon nanophotonics devices that are all scaled down to the diffraction limit – the smallest size that dielectric optics can afford.

“Our CMOS Integrated Nanophotonics breakthrough promises unprecedented increases in silicon chip function and performance via ubiquitous low-power optical communications between racks, modules, chips or even within a single chip itself,” said Dr. Yurii A. Vlasov, Manager of the Silicon Nanophotonics Department at IBM Research. “The next step in this advancement is establishing manufacturability of this process in a commercial foundry using IBM’s deeply scaled CMOS processes.”

By adding just a few more processing modules to a standard CMOS fabrication flow, the technology enables a variety of silicon nanophotonics components — such as modulators, germanium photodetectors and ultra-compact wavelength-division multiplexers — to be integrated with high-performance analog and digital CMOS circuitry. As a result, single-chip optical communications transceivers can now be manufactured in a standard CMOS foundry, rather than assembled from multiple parts made with expensive compound semiconductor technology.

The density of optical and electrical integration demonstrated by IBM’s new technology is unprecedented: a single transceiver channel with all accompanying optical and electrical circuitry occupies only 0.5 mm², 10 times smaller than previously announced by others. The technology is amenable to building single-chip transceivers with an area as small as 4x4 mm² that can receive and transmit at over a terabit per second — that is, more than a trillion bits per second.
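A quick back-of-envelope check shows these figures are mutually consistent (this is only arithmetic on the numbers quoted above, not IBM data beyond them):

```python
# 0.5 mm^2 per transceiver channel on a 4 x 4 mm^2 chip, >1 Tb/s aggregate.
channel_area_mm2 = 0.5
chip_area_mm2 = 4 * 4
channels = int(chip_area_mm2 / channel_area_mm2)   # 32 channels fit on the chip
per_channel_gbps = 1000 / channels                 # ~31 Gb/s each for 1 Tb/s total
```

Thirty-two channels at roughly 31 Gb/s apiece is a plausible per-channel rate for integrated silicon photonics, which is why the terabit aggregate claim holds together.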

The research history

The development of CMOS Integrated Silicon Nanophotonics is the culmination of a series of related advances by IBM Research that produced deeply scaled, front-end-integrated nanophotonics components for optical communications.

The details and results of this research effort will be reported in a presentation delivered by Dr. Yurii Vlasov at SEMICON, a major international semiconductor industry conference, in Tokyo on December 1, 2010. The talk, entitled “CMOS Integrated Silicon Nanophotonics: Enabling Technology for Exascale Computational Systems,” is co-authored by William Green, Solomon Assefa, Alexander Rylyakov, Clint Schow, Folkert Horst, and Yurii Vlasov of IBM’s T.J. Watson Research Center in Yorktown Heights, N.Y., and IBM Zurich Research Lab in Rueschlikon, Switzerland.


A startup called Recorded Future has developed a tool that scrapes real-time data from the Internet to find hints of what will happen in the future. Now the company has offered a glimpse of how its technology works.

Conventional search engines like Google use links to rank and connect different Web pages. Recorded Future’s software goes a level deeper by analyzing the content of pages to track the “invisible” connections between people, places, and events described online.

That is done using a constantly updated index of “streaming data,” including news articles, filings with government regulators, Twitter updates, and transcripts from earnings calls or political and economic speeches. Recorded Future uses linguistic algorithms to identify specific types of events, such as product releases, mergers, or natural disasters, the date when those events will happen, and related entities such as people, companies, and countries. The tool can also track the sentiment of news coverage about companies, classifying it as either good or bad.
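The kind of pattern-based extraction described — spotting an event type, the entities involved, a date, and a crude sentiment in free text — can be sketched as a toy. Real systems use far richer linguistic models; the patterns and word lists below are invented for illustration and are not Recorded Future's algorithms.

```python
import re

EVENT_PATTERNS = {
    "merger": re.compile(
        r"([A-Z][\w\s]*?) (?:acquires|merges with) ([A-Z][\w\s]*?)(?= on |\.)"),
}
DATE = re.compile(r"on (\w+ \d{1,2}, \d{4})")
GOOD, BAD = {"growth", "record", "wins"}, {"lawsuit", "recall", "losses"}

def extract_events(text):
    """Return structured events found in free text, tagged with sentiment."""
    words = set(text.lower().split())
    sentiment = "good" if words & GOOD else ("bad" if words & BAD else "neutral")
    events = []
    for etype, pattern in EVENT_PATTERNS.items():
        for m in pattern.finditer(text):
            date = DATE.search(text)
            events.append({"type": etype,
                           "entities": [m.group(1).strip(), m.group(2).strip()],
                           "date": date.group(1) if date else None,
                           "sentiment": sentiment})
    return events
```

Feeding it a sentence like “Acme acquires Widgets Inc on March 3, 2011.” yields a structured merger event with both entities and the date — the same shape of output, at toy scale, that lets a system index “invisible” connections between people, companies, and future dates.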


Singapore and European research organizations are working together to build what is essentially a single-molecule processor chip. For comparison, a thousand such molecular chips could fit into one of today’s microchips.

The ambitious project, termed Atomic Scale and Single Molecule Logic Gate Technologies (ATMOL), will establish a new process for making a complete molecular chip. This means that computing power can be increased significantly but take up only a small fraction of the space that is required by today’s standards.

The fabrication process involves the use of three unique ultra high vacuum (UHV) atomic scale interconnection machines which build the chip atom-by-atom. These machines physically move atoms into place one at a time at cryogenic temperatures.


Advances in artificial intelligence and computer modeling are allowing therapists to practice “cybertherapy” more effectively, using virtual environments to help people work through phobias, like a fear of heights or of public spaces.

Researchers are populating digital worlds with autonomous, virtual humans that can evoke the same tensions as in real-life encounters. People with social anxiety are struck dumb when asked questions by a virtual stranger. Heavy drinkers feel strong urges to order something from a virtual bartender, while gamblers are drawn to sit down and join a group playing on virtual slot machines.

In a recent study, researchers at USC found that a virtual confidant elicits from people the crucial first element in any therapy: self-disclosure. The researchers are incorporating techniques learned from this research into a virtual agent being developed for the Army, called SimCoach. Guided by language-recognition software, SimCoach — there are several versions, male and female, young and older, white and black — appears on a computer screen and can conduct a rudimentary interview, gently probing for possible mental troubles.

And research at the University of Quebec suggests where virtual humans are headed: realistic three-dimensional forms that can be designed to resemble people in the real world.


Eoplex has developed 3D-printing manufacturing techniques for printing sub-micron-size voxels (3D pixels) to mass produce 3D objects. After some simple, but secret, processing, this stuff turns into metal, ceramics, and empty spaces.

The result can be miniature machines with moving parts, metamaterials-enabled multi-function antennas, piezoelectric powered energy harvesters, coin-sized hydrogen fuel cells, pretty much anything.


Weill Cornell Medical College researchers have built a new type of prosthetic retina that enabled blind mice to see nearly normal images. It could someday restore detailed sight to the millions of people who’ve lost their vision to retinal disease. 

They used optogenetics, a recently developed technique that infuses neurons with light-sensitive proteins from blue-green algae, causing them to fire when exposed to light.

The researchers used mice that were genetically engineered to express one of these proteins, channelrhodopsin, in their ganglion cells. Then, they presented the mice with an image that had been translated into a grid of 6,000 pulsing lights. Each light communicated with a single ganglion cell, and each pulse of light caused its corresponding cell to fire, thus transmitting the encoded image along to the brain.
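The encoding step above amounts to downsampling an image onto the light array and driving each ganglion cell at a rate set by its patch of the scene. Here is a minimal sketch of that idea; the grid size, pulse-rate range, and function names are my assumptions, not the published encoder:

```python
# Downsample an image to a small grid (standing in for the light array)
# and map each cell's mean brightness to a pulse rate for its ganglion cell.

def encode_image(pixels, grid_w, grid_h, max_rate_hz=60):
    """pixels: 2D list of brightness values in [0, 1].
    Returns a grid_h x grid_w grid of pulse rates (Hz), one per light."""
    h, w = len(pixels), len(pixels[0])
    rates = []
    for gy in range(grid_h):
        row = []
        for gx in range(grid_w):
            # Average the block of pixels that this light covers.
            y0, y1 = gy * h // grid_h, (gy + 1) * h // grid_h
            x0, x1 = gx * w // grid_w, (gx + 1) * w // grid_w
            block = [pixels[y][x] for y in range(y0, y1) for x in range(x0, x1)]
            row.append(round(sum(block) / len(block) * max_rate_hz, 1))
        rates.append(row)
    return rates

# 4x4 image, left half dark, right half bright, encoded for a 2x2 array.
img = [[0.0, 0.0, 1.0, 1.0]] * 4
print(encode_image(img, 2, 2))  # [[0.0, 60.0], [0.0, 60.0]]
```

The actual retinal code in the study is far richer than a brightness-to-rate map, but the sketch shows the basic image-to-pulse translation each light performs.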

In humans, such a setup would require a pair of high-tech spectacles, embedded in which would be a tiny camera, an encoder chip to translate images from the camera into the retinal code, and a miniature array of thousands of lights. When each light pulsed, it would trigger a channelrhodopsin-laden ganglion cell. Surgery would no longer be required to implant an electrode array deep into the eye, although some form of gene therapy would be required in order for patients to express channelrhodopsin in their retinas.


New Jersey Institute of Technology researchers used a robotic glove, together with various video games, to train four volunteers who had suffered strokes and lost nearly all movement of their upper limbs.

All four participants showed improved movement after the training.


Imagine a computer equipped with shock-proof memory that’s 100,000 times faster and consumes less power than current hard disks. EPFL Professor Mathias Kläui has invented a new kind of “Racetrack” memory, a high-volume, ultra-rapid read-write magnetic memory that may soon make such a device possible.

Hard disks are cheap and can store enormous quantities of data, but they are slow; every time a computer boots up, 2-3 minutes are lost while information is transferred from the hard disk into RAM (random access memory). The global cost in terms of lost productivity and energy consumption runs into the hundreds of millions of dollars a day.

Kläui’s solution involves data recorded on nickel-iron nanowire, a million times smaller than the classic magnetic tape. And unlike a magnetic videotape, in this system nothing moves mechanically. The bits of information stored in the wire are simply pushed around using a spin polarized current, attaining the breakneck speed of several hundred meters per second in the process. It’s like reading an entire VHS cassette in less than a second.
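The VHS comparison is easy to sanity-check. As a rough back-of-the-envelope calculation (the tape length is my assumption; a standard T-120 cassette holds roughly 246 m of tape):

```python
# Check the "entire VHS cassette in less than a second" claim above.
tape_length_m = 246.0      # assumed T-120 VHS tape length
bit_speed_m_per_s = 300.0  # "several hundred meters per second"

read_time_s = tape_length_m / bit_speed_m_per_s
print(f"{read_time_s:.2f} s")  # about 0.82 s, well under one second
```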

How it works

For this concept to be feasible, each bit of information must be clearly separated from the next so that the data can be read reliably. Kläui overcame this hurdle by using vortices to create magnetic walls between the bits. Using this solution on a 30 nm thick iron-nickel disk, Kläui and his colleagues recorded even higher reading speeds than expected. Their results were published online October 25, 2010, in the journal Physical Review Letters.

Scientists at the IBM Research Center in Zurich (which is developing a racetrack memory) have confirmed the importance of the discovery, and are planning to collaborate with Kläui to build a prototype. Kläui envisions a system in which millions or even billions of nanowires are embedded in an epoxy resin, providing enormous capacity on a shock-proof platform. A market-ready device could be available in as little as 5-7 years.

Racetrack memory promises to be a breakthrough in data storage and retrieval. Racetrack-equipped computers would boot up instantly, and their information could be accessed 100,000 times more rapidly than with a traditional hard disk.

They would also save energy. RAM needs to be refreshed every millionth of a second, so an idle computer consumes about 0.3 watts just maintaining data in RAM. Because Racetrack memory doesn’t have this constraint, energy consumption could be slashed by a factor of 300, to a mere 1 mW. It’s an important consideration: computing and electronics currently consume 6% of worldwide electricity, a share forecast to increase to 15% by 2025.
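The quoted figures are internally consistent:

```python
# 0.3 W of standing RAM-refresh power, cut by a factor of 300,
# should land at about 1 mW.
ram_idle_w = 0.3
reduction_factor = 300

racetrack_idle_w = ram_idle_w / reduction_factor
print(f"{racetrack_idle_w * 1000:.0f} mW")  # 1 mW
```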


When my partners and I joined MySpace, we were lucky enough to be at the leading edge of the social revolution that changed how we use the Internet. A new groundswell is coming, transforming the web once again: the personal revolution.

Information Overload

Today, we live in a world where we’re constantly overwhelmed by information. There are over 90 million tweets per day, 34 hours of YouTube video uploaded every minute, and every Facebook user has an average of 130 friends who are becoming more and more active all the time. We also experience this with content farms flooding search results and with the thousands of articles published every day on traditional websites like the New York Times and ESPN, of which only a handful appeal to each of our individual interests.

The rampant proliferation of information isn’t a new phenomenon. The signal-to-noise ratio on the web has fluctuated substantially as new technology to organize information has battled with new technology to create and distribute information.

Their Web: The Early Days of The Internet

In the early days, content was created and organized by professionals. At first, it was contained in networks like AOL, one of the pioneers of the Internet. As the Internet opened up, Yahoo! brilliantly organized the open web with Yahoo! Directory. But eventually the volume of information overwhelmed even the directory, and search companies like Google introduced a better way to find content we were interested in. By understanding how sites linked to each other, Google applied new science to find a solution within the problem itself. It worked so well that every website is now search-engine optimized for this framework.

Our Web: Present Day

In 2003, user-generated content hit the mainstream via sites like MySpace and YouTube, and the volume of information being created increased dramatically.

“Every two days, we create as much information as we did up to 2003.” –Eric Schmidt, CEO of Google

Search engines weren’t designed to effectively organize this social and real-time data. So innovative companies like Facebook and Twitter created a social filter by empowering our friends and people we trust to organize information for us. This new filter has given us access to more and better information than we ever thought possible. Like search, it’s so effective that every website is now socially optimized for this framework.

Many of you reading this are avid users of social technology. Like me, you’re probably beginning to experience information overload in your social streams. There’s great content there, but it’s getting increasingly difficult to find it. In engineering terms, the signal-to-noise ratio is dropping (or, as a corollary, the work-to-reward ratio is increasing). And, as more people become more active in the social and real-time web, the problem will only get worse.

Your Web: The Future

Imagine opening any web page or application and being presented with an experience that’s entirely personalized to you. Visit a sports site and see stories about the sports you love and the teams you follow featured at the top. Check your daily Groupon for deals that map to your interests. Receive updates from Foursquare about restaurants you’ll want to visit. This is where things are headed. It’s about shifting from you trying to find the right information to the right information finding you.

In the past, we lacked the data and the technology to make this type of personal experience a reality. But that’s changing quickly. The abundant social data that’s overwhelming our social streams presents not only a problem but also the solution. Using natural language processing and semantic analysis to evaluate your tweets, status updates, likes, shares, and check-ins, it’s possible to build a holistic understanding of who you are and what you’re interested in.

Once the web knows your interests, it can start to change… Any website or app can use knowledge of your interests in order to give you a personal experience.

Music followed a similar evolutionary path. Music discovery has evolved from being curated by professionals (DJs, MTV) to being shared socially (mixtapes, playlists) to being organized around your personal interests (Pandora).

All of this doesn’t mean that editors go away or your friends’ referrals don’t matter. Rather, it’s a new lens focused entirely on you.

Building the Personal Web: Enter Gravity’s Interest Graph

Incredible academic and commercial research in the fields of natural language processing and semantic technology has built the groundwork for where we are today. Still we have a long way to go before the personal web is a reality. Gravity will be one of many companies working on the personal web in the coming years. Our platform will allow partners to personalize their experiences when a user connects to the service. The basis for our platform is what we call the Interest Graph, an online representation of your interests, including your strength of attachment and its trajectory over time.


A pioneering research effort could shrink the world’s most powerful supercomputer processors to the size of a sugar cube, IBM scientists say.

The approach will see many computer processors stacked on top of one another, cooling them with water flowing between each one.

The aim is to reduce computers’ energy use, rather than just to shrink them.

Some 2% of the world’s total energy is consumed by building and running computer equipment.

IBM’s Dr. Bruno Michel and his team have already built a prototype to demonstrate the water-cooling principle. Called Aquasar, it occupies a rack larger than a refrigerator.

IBM estimates that Aquasar is almost 50% more energy-efficient than the world’s leading supercomputers.

“We currently have built this Aquasar system that’s one rack full of processors. We plan that 10 to 15 years from now, we can collapse such a system in to one sugar cube — we’re going to have a supercomputer in a sugar cube.”

PHYSORG SAYS: Researchers in Europe have created a robot that uses its body to learn how to think. It is able to learn how to interact with objects by touching them, without needing to rely on a massive database of instructions for every object it might encounter.


Researchers at the Stanford University School of Medicine, applying a state-of-the-art imaging system to brain-tissue samples from mice, have been able to quickly and accurately locate and count the myriad connections between nerve cells in unprecedented detail, as well as to capture and catalog those connections’ surprising variety.

A typical healthy human brain contains about 200 billion neurons, linked to one another via hundreds of trillions of synapses. One neuron may make as many as tens of thousands of synaptic contacts with other neurons, said Stephen Smith, PhD, professor of molecular and cellular physiology and senior author of a paper describing the study, published Nov. 18 in Neuron.

Because synapses are so minute and packed so closely together, it has been hard to get a handle on the complex neuronal circuits that do our thinking, feeling and activation of movement. The new method works by combining high-resolution photography with specialized fluorescent molecules that bind to different proteins and glow in different colors. Massive computing power captures this information and converts it into imagery.

Examined up close, a synapse — less than a thousandth of a millimeter in diameter — is a specialized interface consisting of the edges of two neurons, separated by a tiny gap. Chemicals squirted out of the edge of one neuron diffuse across the gap, triggering electrical activity in the next and thus relaying a nervous signal. There are perhaps a dozen known types of synapses, categorized according to the kind of chemical employed in them. Different synaptic types differ correspondingly in the local proteins, on one abutting neuron or the other, that are associated with the packing, secretion and uptake of the different chemicals.

Synapse numbers in the brain vary over time. Periods of massive proliferation in fetal development, infancy and adolescence give way to equally massive bursts of “pruning” during which underused synapses are eliminated, and eventually to a steady, gradual decline with increasing age. The number and strength of synaptic connections in various brain circuits also fluctuate with waking and sleeping cycles, as well as with learning. Many neurodegenerative disorders are marked by pronounced depletion of specific types of synapses in key brain regions.

In particular, the cerebral cortex — a thin layer of tissue on the brain’s surface — is a thicket of prolifically branching neurons. “In a human, there are more than 125 trillion synapses just in the cerebral cortex alone,” said Smith. That’s roughly equal to the number of stars in 1,500 Milky Way galaxies, he noted.

Synapses in the brain are crowded in so close together that they cannot be reliably resolved by even the best of traditional light microscopes, Smith said. “Now we can actually count them and, in the bargain, catalog each of them according to its type.”

Array tomography, an imaging method co-invented by Smith and Kristina Micheva, PhD, who is a senior staff scientist in Smith’s lab, was used in this study as follows: A slab of tissue — in this case, from a mouse’s cerebral cortex — was carefully sliced into sections only 70 nanometers thick. These ultrathin sections were stained with antibodies designed to match 17 different synapse-associated proteins, and they were further modified by conjugation to molecules that respond to light by glowing in different colors.

The antibodies were applied in groups of three to the brain sections. After each application huge numbers of extremely high-resolution photographs were automatically generated to record the locations of different fluorescing colors associated with antibodies to different synaptic proteins. The antibodies were then chemically rinsed away and the procedure was repeated with the next set of three antibodies, and so forth. Each individual synapse thus acquired its own protein-composition “signature,” enabling the compilation of a very fine-grained catalog of the brain’s varied synaptic types.
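The round-by-round signature compilation can be sketched schematically: three channels of fluorescence per round, concatenated across rounds into a 17-protein signature per synapse. The protein names, presence threshold, and data structures below are invented purely for illustration:

```python
from collections import defaultdict

PROTEINS = [f"protein_{i}" for i in range(17)]          # 17 synaptic markers
ROUNDS = [PROTEINS[i:i + 3] for i in range(0, 17, 3)]   # applied 3 at a time

def build_signatures(measurements):
    """measurements[(round_idx, protein)] maps synapse_id -> fluorescence.
    Returns synapse_id -> tuple of 17 binary presence/absence calls."""
    sigs = defaultdict(dict)
    for r, batch in enumerate(ROUNDS):
        for protein in batch:
            for synapse, value in measurements.get((r, protein), {}).items():
                # Threshold fluorescence into a present/absent call.
                sigs[synapse][protein] = 1 if value > 0.5 else 0
    return {s: tuple(v.get(p, 0) for p in PROTEINS) for s, v in sigs.items()}

# Tiny demo: one synapse, bright in protein_0 (round 0), dim in protein_3.
demo = {(0, "protein_0"): {"syn_a": 0.9}, (1, "protein_3"): {"syn_a": 0.1}}
sig = build_signatures(demo)["syn_a"]
print(sig[:4])  # (1, 0, 0, 0)
```

The real pipeline works on aligned image stacks rather than per-synapse dictionaries, but the logic of accumulating staining rounds into one composite signature is the same.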

All the information captured in the photos was recorded and processed by novel computational software, most of it designed by study co-author Brad Busse, a graduate student in Smith’s lab. It virtually stitched together all the slices in the original slab into a three-dimensional image that can be rotated, penetrated and navigated by the researchers.

The Stanford team used brain samples from a mouse that had been bioengineered so that particularly large neurons that abound in the cerebral cortex express a fluorescent protein, normally found in jellyfish, that glows yellowish-green. This let them visualize synapses against the background of the neurons they linked.

The researchers were able to “travel” through the resulting 3-D mosaic and observe different colors corresponding to different synaptic types just as a voyager might transit outer space and note the different hues of the stars dotting the infinite blackness. A movie was also created by this software.

This level of detailed visualization has never been achieved before, Smith said. “The entire anatomical context of the synapses is preserved. You know right where each one is, and what kind it is,” he said.

Observed in this manner, the brain’s overall complexity is almost beyond belief, said Smith. “One synapse, by itself, is more like a microprocessor — with both memory-storage and information-processing elements — than a mere on/off switch. In fact, one synapse may contain on the order of 1,000 molecular-scale switches. A single human brain has more switches than all the computers and routers and Internet connections on Earth,” he said.

In the course of the study, whose primary purpose was to showcase the new technique’s application to neuroscience, Smith and his colleagues discovered some novel, fine distinctions within a class of synapses previously assumed to be identical. His group is now focused on using array tomography to tease out more such distinctions, which should accelerate neuroscientists’ progress in, for example, identifying how many of which subtypes are gained or lost during the learning process, after an experience such as traumatic pain, or in neurodegenerative disorders such as Alzheimer’s. With support from the National Institutes of Health, Smith’s lab is using array tomography to examine tissue samples from Alzheimer’s brains obtained from Stanford and the University of Pennsylvania.

“I anticipate that within a few years, array tomography will have become an important mainline clinical pathology technique, and a drug-research tool,” Smith said. He and Micheva are founding a company that is now gathering investor funding for further work along these lines. Stanford’s Office of Technology Licensing has obtained one U.S. patent on array tomography and filed for a second.

The Neuron study was funded by the NIH, the Gatsby Charitable Trust, the Howard Hughes Medical Institute, Stanford’s Bio-X program and a gift from Lubert Stryer, MD, the emeritus Mrs. George A. Winzer Professor of Cell Biology in the medical school’s Department of Neurobiology. Other Stanford co-authors of the paper were neuroscience graduate student Nicholas Weiler and senior research scientist Nancy O’Rourke, PhD.


People who die in Hollywood movies often find themselves floating around on a cloud as angels. Now a startup in Huntsville, Alabama, will let you go to a different kind of cloud after you die: the computing kind.

The two-year-old company, called Intellitar, lets people create intelligent avatars, or “intellitars,” of themselves now, so that future generations can spend time with their ancestors forever. The avatars are designed to look and talk like their creators, who stock their virtual selves with information to pass on through virtual conversations.

“We’ve become accustomed to archiving many things: pictures, video, documents, recordings … why not archive yourself?” said Don Davidson, CEO and cofounder of Intellitar.

Conversing with ancestors is an age-old dream. In the 19th and early 20th centuries, auditoriums and private parlors all over North America and Europe hosted “spirit mediums” who claimed to channel the minds and personalities of the dead. After going into a trance, they would start speaking in someone else’s voice and answer questions posed by descendants left behind. Though some well-known mediums were exposed as frauds, millions of people followed Spiritualism as a religion at its height.

Intellitar is bringing this quest into the Facebook era, letting people represent themselves in the virtual afterlife. Instead of trances and spells, Virtual Eternity uses a combination of text-to-speech technology, artificial intelligence and animation tools, all running on the Rackspace cloud-computing infrastructure. It’s currently in “live beta,” but it’s set for commercial launch by the end of this year, Davidson said. After that, Intellitar has other ideas for how its technology could be used, which it isn’t disclosing yet.

To create an intellitar, a user sets up a free or paid account, uploads a single digital portrait photo, and adds information by answering predesigned questions and submitting text.

To make the avatar speak, Intellitar offers a selection of prepared digital voices. As with other elements of Virtual Eternity, a paid account (starting at US$5.95 per month) offers more options. Free members get four sample voices, while those with paid accounts choose from 12, a selection that is being expanded. These are standard voices created by professional voice artists who record a long series of words and phrases. The recordings can be broken down and then recombined into whatever the avatar wants to say, Davidson said.

If the idea of your late grandmother speaking to you in the voice of a well-trained stranger is less than heartwarming, Intellitar is also preparing an alternative approach. By year’s end, the company plans to offer a tool for about $100 that will let your grandmother “train” the avatar in her own voice.

Along with a voice, the avatar needs a face, which is based on the .jpg image you uploaded but can move, thanks to digital animation. The lips, facial muscles and head move as the avatar talks. The avatar’s eyes even blink, though this effect still has a bit of a house-of-horror look. Intellitar tried to find a compromise between a good moving image and available computing resources, and the company is working toward a more realistic experience, Davidson said. A demonstration, featuring Davidson’s own intellitar, is available on the company’s website.

Once the avatar gets a voice and a face, the real person can start to give it an inner life.

“You step through a 40-question personality test, and that’s designed to identify kind of a ‘baseline brain,’ where we come back and recommend a brain,” Davidson said. For example, Intellitar may recommend giving the avatar an extroverted personality. Through another set of questions, which can include 30 or more topics with 50 or more questions in each category, the user fills the avatar’s brain with information loved ones may want to know. The questions are designed to draw out information such as where the user was born and lived, what they did for a living, their likes and dislikes, and their memories of other family members.

The user can also add knowledge through written texts and other content. Plug-ins will even allow a user to give the avatar information he or she never had, such as expert advice on fly-fishing, Davidson said.

When someone starts to have a conversation with the avatar, the artificial intelligence engine will piece together answers from the provided data and the avatar will deliver those answers like a speaking person. More traditional keepsakes such as photos, videos and documents can also be uploaded and provided to viewers, with narration by the avatar.
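Intellitar’s engine is proprietary, but the basic retrieval step it describes, matching a visitor’s question against the stored answers, can be illustrated with a naive word-overlap matcher. All names and data below are hypothetical:

```python
# Score each stored Q&A pair by word overlap with the visitor's question
# and return the best answer, with a fallback when nothing matches.

def tokenize(text):
    return set(text.lower().replace("?", "").split())

def best_answer(question, knowledge):
    """knowledge: list of (stored_question, stored_answer) pairs."""
    q = tokenize(question)
    scored = [(len(q & tokenize(sq)), ans) for sq, ans in knowledge]
    score, answer = max(scored)
    return answer if score > 0 else "I never talked about that."

kb = [("Where were you born?", "In a small town in Alabama."),
      ("What did you do for a living?", "I taught high-school math.")]
print(best_answer("where was grandma born", kb))  # In a small town in Alabama.
```

A production system would add synonym handling, paraphrase detection, and answer stitching, but the sketch captures the question-in, stored-answer-out flow.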

One early user of Virtual Eternity, Nick Lioce, said it was ideal for him and his wife to pass on their cultural heritage to their children.

“The setup and training are simple and straightforward. I guess [the only] negative would be that you could spend a lot of time training your intellitar, meaning you really get into the concept of documenting and archiving your family history,” Lioce said in an e-mail interview.

An individual membership to Intellitar costs $5.95 per month or $64.95 per year. A family membership, including four intellitars, unlimited storage, and other features, costs $24.95 per month or $274.95 per year. In the next few months, the company plans to add a “family tree” feature for paid members, which will display a tree of family members’ avatars in virtual 3D. Other upcoming features include a speech-to-text tool that will allow people to converse with an avatar using a headset, rather than typing in their questions.

Virtual Eternity isn’t solely about being around for eternity. It can also be used among friends or family members who don’t see each other often, Davidson said.

“We envision it as a living family tree, a living scrapbook,” Davidson said. “It survives a person that may be in it, but it certainly could be used while all of the members of the family are alive.”

Compiling information about family members digitally, and maintaining it in a cloud, has advantages over simply storing old papers, tapes and discs. For one thing, if the past few decades are any guide, there are many new generations of digital media ahead, and it will be hard for most consumers to keep updating their mementos. This is one of Intellitar’s selling points, though Davidson was realistic about the future.


Small, lightweight, hands-free cameras — worn on a headband, for example, or tucked over an ear — will record life’s memorable moments as they unfold.

The Looxcie ($199), a small wearable camcorder introduced recently, loops over the ear. The camera is built into a Bluetooth headset that streams 480 x 320 pixel digital images at 15 frames per second wirelessly to Android phones that use a free Looxcie app. From there, the clips can go directly to e-mail.

The GoPro HD Hero 960 records high-definition video at 1,280 x 960 pixels and 30 frames a second. The lens can capture photos or video at an ultrawide, 170-degree angle.

As wearable cameras gain popularity, and as some amateur auteurs capture candid images of people with no wish to star on the Internet, the devices are sure to raise privacy and other issues.


In the creation of the film Avatar, director James Cameron invented a system called Simul-cam. It allowed him to see the video output of the cameras, in real time, but with the human actors digitally altered to look like the alien creatures whom they were playing.

Motus, a commercial version of the system, will be manufactured by gaming hardware company Razer, and should work on any home PC. It is expected to be available for purchase as of early next year, for under $161.

Users of the Motus system hold two Sixense electromagnetic motion-sensitive controllers (like the Wii controllers), and see their environment through a virtual camera – just like the environments of existing video games and animations are already seen. In this system, however, they can look around their virtual world simply by moving one of the controllers, as if it were a camcorder.


The Wrap 920AR augmented reality (AR) glasses from Vuzix mix virtual information, like text or images, into your view of the real world in real time.

The displays are connected to two video cameras that sit on the outside of the glasses, in front of the eyes. The screens show each eye a slightly different view of the world, mimicking natural human vision, which allows for depth perception. Accelerometers, gyro sensors, and magnetometers track the direction in which the wearer is looking.

Priced at $1999, including Autodesk 3ds Max software, the glasses are intended for gamers, animators, architects, and software developers.


The San Diego Supercomputer Center (SDSC) at the University of California, San Diego, will provide expertise to a multi-year technology investment program to develop the next generation of extreme scale supercomputers.

The project is part of the Ubiquitous High Performance Computing (UHPC) program, run by the Defense Advanced Research Projects Agency (DARPA), part of the U.S. Department of Defense. Intel Corporation leads one of the winning teams in the program, and is working closely with SDSC researchers on applications.

The first two phases of the project extend into 2014, and a full system design and simulation is expected at the completion of those phases in 2014. Phases 3 and 4 of the project, which have not yet been awarded, are expected to result in a full prototype system sometime in 2018.

During the first phases of the award, SDSC’s Performance Modeling and Characterization (PMaC) laboratory will assist the Intel-DARPA project by analyzing and mapping strategic applications to run efficiently on Intel hardware.  Applications of interest include rapid processing of real-time sensor data, establishing complex connectivity relationships within graphs (think of determining “six degrees of Kevin Bacon” relationships on Facebook), and complex strategy planning.

Energy consumption at extreme scales is one of the formidable challenges to be taken on by the Intel team. Today’s top supercomputers operate at the petascale level, which means the ability to perform one thousand trillion calculations per second. The next level is exascale, or achieving computing speeds of one million trillion calculations per second – one thousand times faster than today’s machines.

According to Intel, the project will focus on new circuit topologies, new chip and system architectures, and new programming techniques to reduce the amount of energy required per computation by two to three orders of magnitude. That means such extreme-scale systems will require 100 to 1,000 times less energy per computation than today’s most efficient computing systems consume.

“We are working to build an integrated hardware/software stack that can manage data movement with extreme efficiency,” said Allan Snavely, associate director of SDSC and head of the supercomputer center’s PMaC lab. “The Intel team includes leading experts in low-power device design, optimizing compilers, expressive program languages, and high-performance applications, which is PMaC’s special expertise.”

According to Snavely, all these areas must work in a coordinated fashion to ensure that one bit of information is not moved further up or down the memory hierarchy than need be.

“Today’s crude and simplistic memory cache and prefetch policies won’t work at the exascale level because of the tremendous energy costs associated with that motion,” he said. “Today it takes a nanojoule (a billionth of a joule, a joule being the amount of energy needed to produce one watt of power for one second) to move a byte even a short distance. Multiply that byte into an exabyte (one quintillion bytes) and one would need a nuclear plant’s worth of instantaneous power to move it based on today’s technology.”
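Snavely’s arithmetic checks out: a nanojoule per byte, times an exabyte of bytes, moved in one second, is gigawatt-scale power, roughly one large power plant’s output.

```python
energy_per_byte_j = 1e-9   # one nanojoule per byte moved
bytes_moved = 1e18         # one exabyte
time_s = 1.0               # moved in one second

power_w = energy_per_byte_j * bytes_moved / time_s
print(f"{power_w / 1e9:.0f} GW")  # 1 GW
```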

Intel’s other partners for the project include top computer science and engineering faculty at the University of Delaware and the University of Illinois at Urbana-Champaign, as well as top industrial researchers at Reservoir Labs and ET International.

DARPA’s UHPC program directly addresses major priorities expressed by President Obama’s “Strategy for American Innovation,” according to a DARPA release issued earlier this month. These priorities include the exascale supercomputing “Grand Challenge,” energy-efficient computing, and worker productivity. The resulting UHPC capabilities will provide at least 50 times greater energy, computing, and productivity efficiency, which will slash the time needed to design and develop complex computing applications.


A new generation of much more sophisticated and lifelike prosthetic arms, sponsored by DARPA, may be available within the next five to 10 years. Two different prototypes that move with the dexterity of a natural limb and can theoretically be controlled just as intuitively — with electrical signals recorded directly from the brain — are now beginning human tests.

The new designs have about 20 degrees of independent motion, a significant leap over existing prostheses (with just two or three) and they can be operated via a variety of interfaces. One device, developed by DEKA Research and Development, can be consciously controlled using a system of levers in a shoe to pick up boxes, operate a drill, and even use chopsticks.


A new device developed by Affectiva detects and records physiological signs of stress and excitement by measuring slight electrical changes in the skin.

Affectiva’s Q Sensor is worn on a wristband and lets people keep track of stress during everyday activities. The Q Sensor stores or transmits a wearer’s stress levels throughout the day, giving doctors, caregivers, and patients themselves a new tool for observing reactions.

Having clues to a person’s stress levels, which might not otherwise be detectable, could give caregivers and researchers more insight into harmful behaviors associated with autism, such as head banging, and possibly a way to anticipate them. Caregivers can try to identify and block sources of stress and learn what activities restore calm.


Members of the public could form the backbone of powerful new mobile Internet networks by carrying wearable sensors, according to researchers from Queen’s University Belfast.

The novel sensors could create new ultra-high-bandwidth mobile Internet infrastructures and reduce the density of mobile phone base stations. The engineers from Queen’s  Institute of Electronics, Communications and Information Technology (ECIT), are working on a new project based on the rapidly developing science of body-centric communications.

The researchers at ECIT are investigating how small sensors carried by members of the public, in items such as next-generation smartphones, could communicate with each other to create body-to-body networks (BBNs). The new sensors would interact to transmit data, providing “anytime, anywhere” mobile network connectivity.

Dr. Simon Cotton, from ECIT’s wireless communications research group said: “In the past few years a significant amount of research has been undertaken into antennas and systems designed to share information across the surface of the human body. Until now, however, little work has been done to address the next major challenge, which is one of the last frontiers in wireless communication: how that information can be transferred efficiently to an off-body location.

“The availability of body-to-body networks could bring great social benefits, including significant healthcare improvements through the use of body-worn sensors for the widespread, routine monitoring and treatment of illness away from medical centers. This could greatly reduce the current strain on health budgets and help make the Government’s vision of healthcare at home for the elderly a reality.”

It could also bring improvements in mobile gaming and precision monitoring of athletes and real-time tactical training in team sports.

“If the idea takes off, BBNs could also lead to a reduction in the number of base stations needed to service mobile phone users, particularly in areas of high population density,” Cotton said. “This could help to alleviate public perceptions of adverse health associated with current networks and be more environmentally friendly due to the much lower power levels required for operation.

“Our work at Queen’s involves collaborating with national and international academic, industrial and institutional experts to develop a range of models for wireless channels required for body centric communications. These will provide a basis for the development of the antennas, wireless devices and networking standards required to make BBNs a reality.

“Success in this field will not only bring major social benefits it could also bring significant commercial rewards for those involved. Even though the market for wearable wireless sensors is still in its infancy, it is expected to grow to more than 400 million devices annually by 2014.”

KURZWEILAI SAYS: University of Miami (UM) developmental psychologists and computer scientists from the University of California, San Diego (UC San Diego) are studying infant-mother interactions and working to implement their findings in a baby robot capable of learning social skills. The objectives are to help unravel the mysteries of human cognitive development and reach new frontiers in robotics.

The first phase of the project was studying face-to-face interactions between mother and child, to learn how predictable early communication is, and to understand what babies need to act intentionally. The findings are published in the current issue of the journal Neural Networks in a study titled “Applying machine learning to infant interaction: The development is in the details.”

The scientists examined 13 mothers and babies between 1 and 6 months of age while they played during weekly five-minute sessions, with approximately 14 sessions per dyad. The laboratory sessions were videotaped, and the researchers applied an interdisciplinary approach to understanding their behavior.

The researchers found that in the first six months of life, babies develop turn-taking skills, the first step to more complex human interactions. According to the study, babies and mothers find a pattern in their play, and that pattern becomes more stable and predictable with age, explains Daniel Messinger, associate professor of Psychology in the UM College of Arts and Sciences and principal investigator of the study.

“As babies get older, they develop a pattern with their moms,” says Messinger. “When the baby smiles, the mom smiles; then the baby stops smiling and the mom stops smiling, and the babies learn to expect that someone will respond to them in a particular manner,” he says. “Eventually the baby also learns to respond to the mom.”

The next phase of the project is to use the findings to program a baby robot with basic social skills and with the ability to learn more complicated interactions. The robot’s name is Diego-San. He is 1.3 meters tall and modeled after a 1-year-old child. The construction of the robot was a joint venture between Kokoro Dreams and the Machine Perception Laboratory at UC San Diego.

The robot will need to shift its gaze from people to objects based on the same principles babies seem to use as they play and develop. “One important finding here is that infants are most likely to shift their gaze, if they are the last ones to do so during the interaction,” says Messinger. “What matters most is how long a baby looks at something, not what they are looking at.”

Ultimately, the baby robot will give scientists insight into what motivates a baby to communicate and will help answer questions about the development of human learning. This study is funded by the National Science Foundation.


Princeton computer scientists have developed a new way of tracing the origins and spread of ideas, a technique that could make it easier to gauge the influence of notable scholarly papers, buzz-generating news stories and other information sources.

The method relies on computer algorithms to analyze how language morphs over time within a group of documents — whether they are research papers on quantum physics or blog posts about politics — and to determine which documents were the most influential.

“The point is being able to manage the explosion of information made possible by computers and the Internet,” said David Blei, an assistant professor of computer science at Princeton and the lead researcher on the project. “We’re trying to make sense of how concepts move around. Maybe you want to know who coined a certain term like ‘quark,’ or search old news stories to find out where the first 1960s antiwar protest took place.”

Blei said the new search technique might one day be used by historians, political scientists and other scholars to study how ideas arise and spread.

While search engines such as Google and Bing help people sort through the haystack of information on the Web, their results are based on a complex mix of criteria, some of which — such as number of links and visitor traffic — may not fully reflect the influence of a document.

Scholarly journals traditionally quantify the impact of a paper by measuring how often it is cited by other papers, but other collections of documents, such as newspapers, patent claims and blog posts, provide no such means of measuring their influence.

Instead of focusing on citations, Blei and Sean Gerrish, a Princeton doctoral student in computer science, developed a statistical model that allows computers to analyze the actual text of documents to see how the language changes over time. Influential documents in a field will establish new concepts and terms that change the patterns of words and phrases used in later works.

“There might be a paper that introduces the laser, for instance, which is then mentioned in subsequent articles,” Gerrish said. “The premise is that one article introduces the language that will be adopted and used in the future.”
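The premise Gerrish describes can be illustrated with a toy score: credit a document when the terms it uses become more frequent in later years. This is only an illustration of the idea, not the authors’ actual statistical model, and the sample documents are invented:

```python
from collections import Counter

def influence_scores(docs_by_year):
    """Toy influence score: a document earns credit when terms it uses
    show up often in later years' documents. Purely illustrative; the
    real model is a statistical analysis of language change over time."""
    years = sorted(docs_by_year)
    scores = {}
    for i, year in enumerate(years[:-1]):
        # Count all words appearing in every later year's documents.
        later = Counter()
        for y in years[i + 1:]:
            for doc in docs_by_year[y]:
                later.update(doc.split())
        # Score each document in this year by later uptake of its terms.
        for doc_id, doc in enumerate(docs_by_year[year]):
            terms = set(doc.split())
            scores[(year, doc_id)] = sum(later[t] for t in terms) / len(terms)
    return scores

docs = {
    2000: ["the laser produces coherent light"],
    2001: ["laser light is coherent", "we study magnets"],
    2002: ["coherent laser applications expand"],
}
print(influence_scores(docs))
```

In this toy corpus the 2000 “laser” paper scores highest, because its vocabulary is adopted by the later documents, while the “magnets” paper scores zero.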

Previous methods developed by the researchers for tracking how language changes accounted for how a group of documents influenced a subsequent group of documents, but were unable to isolate the influence of individual documents. For instance, those models can analyze all the papers in a certain science journal one year and follow the influence they had on the papers in the journal the following year, but they could not say if a certain paper introduced groundbreaking ideas.

To address this, Blei and Gerrish developed their algorithm to recognize the contribution of individual papers and used it to analyze several decades of reports published in three science journals: Nature, the Proceedings of the National Academy of Sciences and the Association for Computational Linguistics Anthology. Because they were working with scientific journals, they could compare their results with the citation counts of the papers, the traditional method of measuring scholarly impact.

They found that their results agreed with citation-based impact about 40 percent of the time. In some cases, they discovered papers that had a strong influence on the language of science, but were not often cited. In other cases, they found that papers that were cited frequently did not have much impact on the language used in a field.

They found no citations, for instance, for an influential column published in Nature in 1972 that correctly predicted an expanded role of the National Science Foundation in funding graduate science education.

On the other hand, their model gave a low influence score to a highly cited article on a new linguistics research database that was published in 1993 in the Association for Computational Linguistics Anthology. “That paper introduced a very important resource, but did not present paradigm-changing ideas,” Blei said. “Consequently, our language-based approach could not correctly identify its impact.”

Blei said their model was not meant as a replacement for citation counts but as an alternative method for measuring influence that might be extended to finding influential news stories, websites, and legal and historical documents.

“We are also exploring the idea that you can find patterns in how language changes over time,” he said. “Once you’ve identified the shapes of those patterns, you might be able to recognize something important as it develops, to predict the next big idea before it’s gotten big.”

Adapted from materials provided by Princeton University


After Gap’s new logo failed, neuromarketing company NeuroFocus used EEG and eye-tracking techniques to investigate the neural responses of a group of volunteers who were shown both Gap logos.

NeuroFocus explained that the new logo didn’t register as novel or stylish in the volunteers’ brains, two big no-nos for a successful logo. The old logo, on the other hand, was a big hit, scoring high in the company’s “stylish” metric.


Scientists with the Lawrence Berkeley National Laboratory (Berkeley Lab) have designed an electrical link to living cells engineered to shuttle electrons across a cell’s membrane to an external acceptor along a well-defined path. This direct channel could yield cells that can read and respond to electronic signals, electronics capable of self-replication and repair, or systems that efficiently convert sunlight into electricity.

Coaxing electrons across a cellular membrane is not trivial: attempts to pull an electron from a cell may disrupt its function, or kill the entire cell in the process. What’s more, current techniques to transfer cellular electrons to an external source lack a molecular roadmap, which means even if electrons do turn up outside a cell, there is no way to direct their behavior, see where they stopped along the way, or send a signal back to the cell’s interior.

So the researchers first cloned a part of the extracellular electron transfer chain of Shewanella oneidensis MR-1, a marine and soil bacterium capable of reducing heavy metals in oxygen-free environments. This chain or “genetic cassette” is essentially a stretch of DNA that contains the instructions for making the electron conduit. Additionally, because all life as we know it uses DNA, the genetic cassette can be plugged into any organism. The team showed this natural electron pathway could be popped into a (harmless) strain of E. coli — a versatile model bacterium in biotechnology — to precisely channel electrons inside a living cell to an inorganic mineral: iron oxide, also known as rust.

Bacteria in environments without oxygen, such as Shewanella, use iron oxide from their surroundings to breathe. As a result, these bacteria have evolved mechanisms for direct charge transfer to inorganic minerals found deep in the sea or soil. The Berkeley Lab team showed that their engineered E. coli could efficiently reduce iron and iron oxide nanoparticles — the latter five times faster than E. coli alone.

“This recent breakthrough is part of a larger Department of Energy project on domesticating life at the cellular and molecular level. By directly interfacing synthetic devices with living organisms, we can harness the vast capabilities of life in photo and chemical energy conversion, chemical synthesis, and self-assembly and repair,” said Jay Groves, a faculty scientist at Berkeley Lab and professor of chemistry at the University of California, Berkeley.

“Cells have sophisticated ways of transferring electrons and electrical energy. However, just sticking an electrode into a cell is about as ineffective as sticking your finger into an electrical outlet when you are hungry. Instead, our strategy is based on tapping directly into the molecular electron transport chain used by cells to efficiently capture energy.”

The researchers plan to implement this genetic cassette in photosynthetic bacteria, as cellular electrons from these bacteria can be produced from sunlight — providing cheap, self-replicating solar batteries. These metal-reducing bacteria could also assist in producing pharmaceutical drugs, as the fermentation step in drug manufacturing requires energy-intensive pumping of oxygen. In contrast, these engineered bacteria breathe using rust, rather than oxygen, saving energy.

A paper reporting this research titled, “Engineering of a synthetic electron conduit in living cells,” appears in Proceedings of the National Academy of Sciences and is available to subscribers online.

Adapted from materials provided by Berkeley Lab


Oct 15, 2010 — The U.S. military has been funding development of flexible OLEDs with long enough lifetimes and consistent quality, with the aim of providing soldiers with rugged, thin communications devices that can display maps and video without adding too much weight to their load.

NEW YORK TIMES SAYS: BodyMedia has announced that its armband sensors will be able to communicate with smartphones, and wirelessly, using Bluetooth. Its health sensors will be one of the first devices, other than ear buds, that link to smartphones with Bluetooth short-range communications.

It opens the door to allowing a person to monitor a collection of the 9,000 variables — physical activity, calories burned, body heat, sleep efficiency and others — collected by the sensors in a BodyMedia armband in real time, as the day goes on.


By comparing a clearly defined visual input with the electrical output of the retina, researchers at the Salk Institute for Biological Studies were able to trace for the first time the neuronal circuitry that connects individual photoreceptors with retinal ganglion cells, the neurons that carry visual signals from the eye to the brain.

Their measurements, published in the Oct. 7, 2010, issue of the journal Nature, not only reveal computations in a neural circuit at the elementary resolution of individual neurons but also shed light on the neural code used by the retina to relay color information to the brain.

“Nobody has ever seen the entire input-output transformation performed by complete circuits in the retina at single-cell resolution,” says senior author E.J. Chichilnisky, Ph.D., an associate professor in the Systems Neurobiology Laboratories. “We think these data will allow us to more deeply understand neuronal computations in the visual system and ultimately may help us construct better retinal implants.”


Computer scientists predict that a new generation of malware will mine social networks for people’s private patterns of behavior: the pattern of links between individuals and their behavior in the network–how often they email or call each other, how information spreads between them and so on.

The idea would be to release malware that records the patterns of links in a network. This kind of malware will be very hard to detect, says Yaniv Altshuler of Ben-Gurion University and colleagues, who have studied the strategies that best mine behavioral-pattern data from a real mobile phone network consisting of 800,000 links between 200,000 phones. (They call this type of attack “Stealing Reality”.)

TECHNOLOGY REVIEW SAYS: In the near future, all citizens will wear a centrally controlled super-iPhone that tracks their movements and can scan everyone around them to divulge their net worth, their shopping history and their dating potential.


A team of researchers at Carnegie Mellon University — supported by grants from the Defense Advanced Research Projects Agency and Google, and tapping into a research supercomputing cluster provided by Yahoo — has been fine-tuning a computer system that is trying to master semantics by learning more like a human.

The computer was primed by the researchers with some basic knowledge in various categories and set loose on the Web with a mission to teach itself.

The Never-Ending Language Learning system, or NELL, has made an impressive showing so far. NELL scans hundreds of millions of Web pages for text patterns that it uses to learn facts, 390,000 to date, with an estimated accuracy of 87 percent. These facts are grouped into semantic categories — cities, companies, sports teams, actors, universities, plants and 274 others. NELL also learns facts that are relations between members of two categories.
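The text-pattern idea can be sketched roughly as follows. The patterns, category, and example sentence are invented for illustration; NELL’s real system learns its patterns rather than hard-coding them, and also learns relations between categories:

```python
import re

# Sketch of pattern-based fact extraction in the spirit of NELL: textual
# patterns associated with a category ("city") are matched against raw
# text, and each match becomes a candidate fact for that category.
# These specific patterns are invented for illustration only.
CITY_PATTERNS = [
    r"cities such as ([A-Z][a-z]+)",
    r"the city of ([A-Z][a-z]+)",
]

def extract_cities(text):
    """Return the set of candidate city names matched by any pattern."""
    found = set()
    for pattern in CITY_PATTERNS:
        found.update(re.findall(pattern, text))
    return found

text = ("Flights connect cities such as Boston and the city of Rome "
        "with hubs worldwide.")
print(extract_cities(text))  # {'Boston', 'Rome'}
```

Scaled up to hundreds of millions of pages, candidate facts that recur across many independent patterns can be promoted to the knowledge base with a confidence estimate, which is where figures like NELL’s 87 percent accuracy come from.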


Intel Labs has created an experimental “Single-chip Cloud Computer,” (SCC) a research microprocessor containing the most Intel Architecture cores ever integrated on a silicon CPU chip – 48 cores. It incorporates technologies intended to scale multi-core processors to 100 cores and beyond, such as an on-chip network, advanced power management technologies and support for “message-passing.”
Architecturally, the chip resembles a cloud of computers integrated into silicon. The novel many-core architecture includes innovations for scalability in terms of energy-efficiency including improved core-core communication and techniques that enable software to dynamically configure voltage and frequency to attain power consumptions from 125W to as low as 25W.


Eugenio Culurciello of Yale’s School of Engineering & Applied Science has developed a supercomputer based on the ventral pathway of the mammalian visual system. Dubbed NeuFlow, the system mimics the visual system’s neural network to quickly interpret the world around it.

The system uses complex vision algorithms developed by Yann LeCun at New York University to run large neural networks for synthetic vision applications. One idea — the one Culurciello and LeCun are focusing on — is a system that would allow cars to drive themselves. In order to be able to recognize the various objects encountered on the road—such as other cars, people, stoplights, sidewalks, and the road itself—NeuFlow processes tens of megapixel images in real time.

The system is also extremely efficient, running more than 100 billion operations per second on only a few watts (less than the power a cell phone uses) to accomplish what takes benchtop computers with multiple graphics processors more than 300 watts.

“One of our first prototypes of this system is already capable of outperforming graphic processors on vision tasks,” Culurciello said.

Culurciello embedded the supercomputer on a single chip, making the system much smaller, yet more powerful and efficient, than full-scale computers. “The complete system is going to be no bigger than a wallet, so it could easily be embedded in cars and other places,” Culurciello said.

Beyond the autonomous car navigation, the system could be used to improve robot navigation into dangerous or difficult-to-reach locations, to provide 360-degree synthetic vision for soldiers in combat situations, or in assisted living situations where it could be used to monitor motion and call for help should an elderly person fall, for example.

Other collaborators include Clement Farabet (Yale University and New York University), Berin Martini, Polina Akselrod, Selcuk Talay (Yale University) and Benoit Corda (New York University).


A massive new project to scan the brains of 1,200 volunteers could finally give scientists a picture of the neural architecture of the human brain and help them understand the causes of certain neurological and psychological diseases.

The National Institutes of Health announced $40 million in funding this month for the five-year effort, dubbed the Human Connectome Project. Scientists will use new imaging technologies, some still under development, to create both structural and functional maps of the human brain.

The researchers also plan to collect genetic and behavioral data, testing participants’ sensory and motor skills, memory, and other cognitive functions, and deposit this information along with brain scans in a public database that can be used to search for the genetic and environmental factors that influence the structure of the brain.

David Van Essen, a neuroscientist at Washington University and his collaborators plan to scan participants using two relatively recent variations on MRI. Diffusion imaging, which detects the flow of water molecules down insulated neural wires, indirectly measures the location and direction of the fibers that connect one part of the brain to another. Functional connectivity, in contrast, examines whether activity in different parts of the brain fluctuates in synchrony. The regions that are highly correlated are most likely to be connected, either directly or indirectly. Combining both approaches will give scientists a clearer picture.
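The functional-connectivity approach comes down to correlating activity time series: regions that fluctuate in synchrony are likely connected. A minimal sketch with synthetic data (the signals and noise levels here are invented, not real fMRI data):

```python
import numpy as np

# Two "regions" share a common slow fluctuation (so they are synchronized),
# a third is independent noise. Their pairwise correlations stand in for
# functional connectivity.
rng = np.random.default_rng(0)
t = np.linspace(0, 60, 600)              # 60 s of simulated scan time
shared = np.sin(0.5 * t)                 # common slow fluctuation

region_a = shared + 0.2 * rng.standard_normal(t.size)
region_b = shared + 0.2 * rng.standard_normal(t.size)  # synchronized with A
region_c = rng.standard_normal(t.size)                 # independent region

signals = np.vstack([region_a, region_b, region_c])
connectivity = np.corrcoef(signals)      # 3x3 correlation matrix
print(connectivity.round(2))             # A-B high, A-C and B-C near zero
```

In a real analysis the matrix would span hundreds of brain regions, and the highly correlated pairs would be the candidate connections to cross-check against the diffusion-imaging fiber maps.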


More and more, computers will serve to “augment humanity” by filtering and directing relevant information to users, Google chief executive Eric Schmidt said Tuesday.

Schmidt said that search traffic tripled throughout the first half of 2010, and highlighted Google Goggles and Google Translate as two services that can use the smartphone as a sensor, passing information up to the service that’s stored in the cloud.

Google is “building social information into all of our products. So it won’t be a social network the way people think of Facebook, but rather social information about who your friends are, people you interact with and we have various ways we’ll be collecting that information.”

KURZWEILAI SAYS: University of Michigan scientists have created the smallest pixels available that will enable LED, projected, and wearable displays to be more energy-efficient with more light manipulation possible, all on a display that may eventually be as small as a postage stamp.


The humanoid robot iCub has learned a new skill: archery. After being taught how to hold a bow and shoot an arrow, it learned for itself how to improve its aim, and was so successful it could hit a bullseye after only eight trials.

The algorithm used to teach iCub, developed by Dr. Petar Kormushev and colleagues of the Italian Institute of Technology, is called the Augmented Reward Chained Regression (ARCHER), a chained vector regression algorithm that uses experience gained from each trial to fine-tune the next attempt by modulating and coordinating the movements of the robot’s hands.

Movements of the arms are controlled by an inverse kinematics controller. After each shot a camera takes a picture of the target and an image recognition system based on Gaussian Mixture Models determines where the tip of the arrow hit the target by filtering the colored pixels of the picture based on their likelihood of belonging to the target or the arrow head. This information is then used as feedback for the algorithm.
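The feedback principle (use each shot’s observed error to adjust the next attempt) can be sketched in a toy form. This is not the actual ARCHER algorithm, and the shooting “physics” below is a made-up fixed aiming bias:

```python
# Toy learning-from-error aim correction, illustrating the feedback loop
# described above (not the actual ARCHER chained-regression algorithm).
def shoot(aim, bias=(3.0, -2.0)):
    """Hypothetical physics: the arrow lands at the aim point plus a
    fixed, unknown-to-the-learner bias."""
    return (aim[0] + bias[0], aim[1] + bias[1])

aim = [0.0, 0.0]
for trial in range(8):
    hit = shoot(aim)                 # camera observes where the arrow hit
    error = (hit[0], hit[1])         # bullseye is at (0, 0)
    aim[0] -= 0.5 * error[0]         # shift aim against the observed error
    aim[1] -= 0.5 * error[1]

print([round(v, 3) for v in hit])    # within a few hundredths of the bullseye
```

Each trial halves the remaining error, so after eight shots the arrow lands within a few hundredths of a unit of the bullseye, loosely mirroring the robot’s eight-trial convergence.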


Researchers at Sun Yat-Sen University in China have demonstrated a way to record on ferromagnetic films using laser-assisted ultrafast magnetization reversal dynamics. The development will allow for practical use of new technology for recording more than 6,000 terabits (6 petabits) of data on a single 5-inch disc, using ultra-high-density magneto-optical storage devices.

The new ultrafast recording technique uses “time-resolved polar Kerr spectroscopy” combined with an alternating magnetic field strong enough to re-initialize the magnetization state of gadolinium-iron-cobalt (GdFeCo) thin films. The researchers showed that the magnetization reversal could occur on a sub-nanosecond time scale, which implies that next-generation magneto-optical storage devices can not only realize higher recording densities but also ultrafast data writing of up to a gigahertz — at least thirty times faster than that of present hard disks in computers.

Laser-assisted magnetic recording was demonstrated on a sub-picosecond time scale under a saturated external magnetic field. “We found that the rate of magnetization reversal is proportional to the external magnetic field,” says Tianshu Lai, “and the genuine thermo-magnetic recording should happen within several tens to hundreds of picoseconds when we apply a smaller magnetic field than the coercivity of the recording films.”


“We are entering a neurotechnology renaissance, in which the toolbox for understanding the brain and engineering its functions is expanding in both scope and power at an unprecedented rate,” says Ed Boyden, an assistant professor of biological engineering and brain and cognitive sciences at the MIT Media Lab.

“Consider a system that reads out activity from a brain circuit, computes a strategy for controlling the circuit so it enters a desired state or performs a specific computation, and then delivers information into the brain to achieve this control strategy. Such a system would enable brain computations to be guided by predefined goals set by the patient or clinician, or adaptively steered in response to the circumstances of the patient’s environment or the instantaneous state of the patient’s brain.

“Some examples of this kind of ‘brain coprocessor’ technology are under active development, such as systems that perturb the epileptic brain when a seizure is electrically observed, and prosthetics for amputees that record nerves to control artificial limbs and stimulate nerves to provide sensory feedback. Looking down the line, such system architectures might be capable of very advanced functions–providing just-in-time information to the brain of a patient with dementia to augment cognition, or sculpting the risk-taking profile of an addiction patient in the presence of stimuli that prompt cravings.

“In the future, the computational module of a brain coprocessor may be powerful enough to assist in high-level human cognition or complex decision making.”
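The read-compute-deliver loop Boyden describes can be sketched schematically with the seizure example; everything below (the sensor, the stimulator, the threshold policy, the toy dynamics) is a hypothetical stand-in, not a real device interface:

```python
# Schematic of a "brain coprocessor" loop: read out activity, compute a
# control decision, deliver a corrective input. All components here are
# hypothetical stand-ins for illustration.
def control_loop(read_activity, deliver_stimulus, threshold=1.0, steps=100):
    for _ in range(steps):
        signal = read_activity()             # read: circuit activity level
        if abs(signal) > threshold:          # compute: detect abnormal state
            deliver_stimulus(-0.5 * signal)  # deliver: counteracting input

# Toy "brain": an activity level that escalates unless stimulated.
state = {"level": 0.0}

def read_activity():
    state["level"] += 0.1                    # activity drifts upward
    return state["level"]

def deliver_stimulus(amount):
    state["level"] += amount                 # stimulation damps activity

control_loop(read_activity, deliver_stimulus)
print(round(state["level"], 2))              # activity held near threshold
```

Without the corrective branch the level would climb to 10.0 over the 100 steps; with it, the loop keeps activity bounded near the threshold, which is the essence of the seizure-perturbation systems mentioned above.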


In his keynote speech at the Intel Developer Forum Wednesday, Intel vice president and chief technology officer Justin Rattner focused on “context-aware computing,” in which devices anticipate your needs and desires and help fulfill them—before you even ask.

He described future devices that will constantly learn about you and your preferences based on how you use the device. For example:

  • Recommending restaurants based on what the user likes and eats, and the user’s location in the city
  • A remote control that “enhances the smart TV experience” by recognizing who’s holding a remote control and adjusting the viewing experience accordingly
  • A sense system, roughly the size of a large cell phone, that could animate avatars to let you know a person’s current activity or state of activities. One example was how someone sitting and drinking coffee might receive a phone call and leave the coffee shop, by showing how the device would animate a troll-like creature at first sitting and then walking while talking on a cell phone.


A new “smart materials” process — Multiple Memory Material Technology — developed by University of Waterloo engineering researchers promises to revolutionize the manufacture of diverse products such as medical devices, microelectromechanical systems (MEMS), printers, hard drives, automotive components, valves and actuators.

The breakthrough technology will provide engineers with much more freedom and creativity by enabling far greater functionality to be incorporated into medical devices such as stents, braces and hearing aids than is currently possible.

Smart materials, also known as shape memory alloys, have been around for several decades and are well known for their ability to remember a pre-determined shape. Traditional memory materials remember one shape at one temperature and a second shape at a different temperature, and until now they have been limited to that single shape change. With the new Waterloo technology, a single material can hold multiple memories, each with a different shape triggered at a different temperature.

“This ground-breaking technology makes smart materials even smarter,” said Ibraheem Khan, a research engineer and graduate student working with Norman Zhou, a professor of mechanical and mechatronics engineering. “We have developed a technology that embeds several memories in a monolithic smart material. In essence, a single material can be programmed to remember more shapes, making it smarter than previous technologies.”


UCLA researchers have fabricated the fastest  graphene transistor to date, using a new fabrication process with a  nanowire as a self-aligned gate.

Self-aligned gates are a key element in modern transistors, which are semiconductor devices used to amplify and switch electronic signals.  Gates are used to switch the transistor between various states, and self-aligned gates were developed to deal with problems of misalignment encountered because of the shrinking scale of electronics.

“This new strategy overcomes two limitations previously encountered in graphene transistors,” professor of chemistry and biochemistry Xiangfeng Duan said. “First, it doesn’t produce any appreciable defects in the graphene during fabrication, so the high carrier mobility is retained. Second, by using a self-aligned approach with a nanowire as the gate, the group was able to overcome alignment difficulties previously encountered and fabricate very short-channel devices with unprecedented performance.”

These advances allowed the team to demonstrate the highest-speed graphene transistors to date, with a cutoff frequency up to 300 GHz — comparable to the very best transistors made from high-electron-mobility materials such as gallium arsenide or indium phosphide.

KURZWEILAI SAYS: Rice University scientists have created the first two-terminal memory chips that use only silicon to generate nanocrystal wires as small as 5 nanometers — far smaller than circuitry in even the most advanced computers and electronic devices. The technology breakthrough promises to extend the limits of miniaturization subject to Moore’s Law, and should be easily adaptable to nanoelectronic manufacturing techniques.


Intel’s scientists are creating detailed maps of the brain activity associated with individual words. These maps can then be matched against the brain activity of someone using the computer, allowing the machine to determine the word they are thinking of.

The technology currently uses fMRI, but work is under way to produce smaller devices that can be worn as headsets.

Justin Rattner, the company’s chief technology officer, said: “Mind reading is the ultimate user interface. There will be concerns about privacy with this sort of thing and we will have to overcome them. What is clear though is that humans are not restricted any more to just using keyboards and mice.”


Microsoft researchers have created a mobile device that could forge a trail of “digital bread crumbs.”

The device would collect trail data while the user walked indoors, underground, or in other spaces where GPS signals are weak or unavailable, such as multilevel parking garages that can baffle people who forget where they parked.

A prototype phone called Menlo has an accelerometer to detect movement, a side-mounted compass to determine direction, and a barometric pressure sensor to track changes in altitude.

An app called Greenfield counts a user’s sequence of steps, gauges direction changes, and even calculates how many floors the user has traversed by stairs or an elevator. The app stores the trail data so that a user can later retrace their path precisely.

Greenfield could be used to rescue hikers and mountain climbers, for new kinds of urban street games, or to recover lost items and find friends at a stadium.
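The step-counting scheme described above is essentially pedestrian dead reckoning: each detected step advances the position along the current compass heading, while changes in barometric pressure indicate floor transitions. A minimal sketch in Python (the stride length, pressure-per-floor constant, and function names are my own illustrative assumptions, not details from Microsoft's prototype):

```python
import math

STEP_LENGTH_M = 0.7           # assumed average stride length
PRESSURE_PER_FLOOR_PA = 40.0  # rough pressure drop per floor ascended (~3 m)

def dead_reckon(events, start_pressure_pa):
    """Integrate per-step (heading_deg, pressure_pa) samples into a 2D
    path (metres east/north of the start) and a floor-change estimate."""
    x = y = 0.0
    path = [(0.0, 0.0)]
    last_pressure = start_pressure_pa
    for heading_deg, pressure_pa in events:
        rad = math.radians(heading_deg)
        x += STEP_LENGTH_M * math.sin(rad)   # east component of the step
        y += STEP_LENGTH_M * math.cos(rad)   # north component of the step
        path.append((round(x, 2), round(y, 2)))
        last_pressure = pressure_pa
    # pressure falls as you ascend, so start minus last gives floors climbed
    floors = round((start_pressure_pa - last_pressure) / PRESSURE_PER_FLOOR_PA)
    return path, floors
```

Retracing the trail is then just walking the recorded list of positions in reverse.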


Biometrics R&D firm Global Rainmakers Inc. (GRI) is rolling out its iris-scanning technology to create what it calls “the most secure city in the world.” In a partnership with Leon — one of the largest cities in Mexico, with a population of more than a million — GRI will fill the city with eye-scanners. The technology could revolutionize the way we live, for law enforcement and marketers alike.

“In the future, whether it’s entering your home, opening your car, entering your workspace, getting a pharmacy prescription refilled, or having your medical records pulled up, everything will come off that unique key that is your iris,” says Jeff Carter, CDO of Global Rainmakers. Before coming to GRI, Carter headed a think tank partnership between Bank of America, Harvard, and MIT. “Every person, place, and thing on this planet will be connected [to the iris system] within the next 10 years,” he says.


Researchers at the Computer Vision Lab at ETH Zurich have developed a method to produce virtual copies of real objects that can be touched and sent via the Internet.

A 3D scanner records an image of the object. A sensor arm with force, acceleration, and slip sensors then collects information about shape and solidity, and a virtual copy is created on the computer. A user senses the object using a haptic (touch) device and views it with data goggles.

The sensor rod is equipped with small motors. A computer program calculates when the virtual object and the sensor rod meet, then sends a signal to the motors in the rod, producing force feedback.
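The meet-and-push-back loop described here is commonly implemented as penalty-based haptic rendering: when the rod's tip penetrates the virtual surface, the motors exert a spring-like restoring force along the surface normal, proportional to penetration depth. A minimal sketch against a virtual sphere (the stiffness value and the sphere model are illustrative assumptions, not details from the ETH work):

```python
def feedback_force(probe_pos, center, radius, stiffness=800.0):
    """Penalty-based haptic rendering against a virtual sphere.

    Returns the (fx, fy, fz) force the rod's motors should exert:
    zero outside the object, and a spring force (stiffness times
    penetration depth) along the outward surface normal on contact."""
    dx = [p - c for p, c in zip(probe_pos, center)]
    dist = sum(d * d for d in dx) ** 0.5
    penetration = radius - dist
    if penetration <= 0 or dist == 0:
        return (0.0, 0.0, 0.0)            # no contact: motors idle
    normal = [d / dist for d in dx]       # outward surface normal
    return tuple(stiffness * penetration * n for n in normal)
```

Haptic loops of this kind typically run at around 1 kHz, so that contact feels crisp rather than spongy.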

In a future “Beaming” project, the researchers plan to project images of people at remote locations for virtual meetings.


Google has launched a service that could bring machine learning to many more apps. Google Prediction API provides a simple way for developers to create software that learns how to handle incoming data.

For example, the Google-hosted algorithms could be trained to sort e-mails into categories for “complaints” and “praise” using a dataset that provides many examples of both kinds. Future e-mails could then be screened by software using that API, and handled accordingly.

Google’s service provides a kind of machine-learning black box: data goes in one end, and predictions come out the other. There are three basic commands: one to upload a collection of data, another telling the service to learn what it can from it, and a third to submit new data for the system to react to based on what it learned.
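The three-command workflow can be mimicked locally. The sketch below is not Google's actual API; it is a toy stand-in with the same shape — upload labeled examples, tell it to train, then submit new data for a prediction — using the complaints-vs-praise example and a crude word-count scorer:

```python
from collections import Counter, defaultdict

class PredictionService:
    """Toy stand-in for a train/predict black box (not Google's real API)."""

    def __init__(self):
        self.examples = []        # command 1: uploaded (label, text) rows
        self.word_counts = None

    def upload(self, label, text):
        self.examples.append((label, text))

    def train(self):              # command 2: learn from the uploaded data
        self.word_counts = defaultdict(Counter)
        for label, text in self.examples:
            self.word_counts[label].update(text.lower().split())

    def predict(self, text):      # command 3: classify new, unseen data
        words = text.lower().split()
        scores = {label: sum(counts[w] for w in words)
                  for label, counts in self.word_counts.items()}
        return max(scores, key=scores.get)
```

Trained on a handful of labeled e-mails, `predict("terrible and bad")` comes back `"complaints"`; the hosted service presumably uses far stronger learners, but the in/out contract is the same.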


“Five years from now on the web for free you’ll be able to find the best lectures in the world,” says Bill Gates. “It will be better than any single university.”

He believes the $50,000 a year university education could be done via the web for as little as $2,000.


A study by USC scientists of the connections in a small area of the rat brain revealed patterns of circular loops, suggesting that, at least in this part of the rat brain, the wiring diagram resembles a distributed network rather than a hierarchy, the traditional view.

“We started in one place and looked at the connections. It led into a very complicated series of loops and circuits. It’s not an organizational chart. There’s no top and bottom to it,” said Larry W. Swanson, a member of the National Academy of Sciences and a professor of biological sciences at USC College. The study was published in Proceedings of the National Academy of Sciences August 9, 2010.

The circuit tracing method allows the study of incoming and outgoing signals from any two brain centers. It was invented and refined by USC neuroscientist Richard H. Thompson over eight years. Thompson is a research assistant professor of biological sciences at the College. Most other tracing studies at present focus only on one signal, in one direction, at one location.

“[We] can look at up to four links in a circuit, in the same animal at the same time. That was our technical innovation,” Swanson said.

The Internet model would explain the brain’s ability to overcome much local damage, he said. “You can knock out almost any single part of the Internet and the rest of it works.” Likewise, Swanson said, “There are usually alternate pathways through the nervous system. It’s very hard to say that any one part is absolutely essential.”

Swanson first argued for the distributed model of the brain in his book Brain Architecture: Understanding the Basic Plan (Oxford University Press, 2003). The PNAS study appears to support his view.

“There is an alternate model. It’s not proven, but let’s rethink the traditional way of regarding how the brain works,” he said. “The part of the brain you think with, the cortex, is very important, but it’s certainly not the only part of the nervous system that determines our behavior.”

The research described in the PNAS study was supported by the National Institute of Neurological Disorders and Stroke in the National Institutes of Health.


A confidential, seven-page Google Inc. “vision statement” shows Google in a deep round of soul-searching over a basic question: how far should it go in profiting, through audience targeting, from the vast trove of data it possesses about people’s activities without violating privacy?

The memo outlined a sweeping vision in which Google could get other websites from around the Internet to share their data with it for the purpose of targeting ads.

PCMAG SAYS: Google on Thursday introduced the next generation of interaction, running on its Android operating system: voice-driven actions.

Google’s “Voice Search” app includes 12 new “Voice Actions for Android,” including phone calls, reminder e-mails, direction search, and music search. A second improvement, “Chrome to Phone,” allows users to click on a new “mobile phone” icon to send links, YouTube videos, even directions, to the phone.


University of Sydney researchers have developed the first non-invasive method of stimulating the brain that can boost visual memory. It uses transcranial direct current stimulation (tDCS), in which weak electrical currents are applied to the scalp using electrodes.

The method can temporarily increase or decrease activity in the anterior temporal lobe (ATL), an area near the temple, and has already been shown to boost verbal and motor skills in volunteers.

Interesting article on artificial intelligence modelled on the behaviour of insects (I talk about such things in essays like ‘It’s Alive’).


Scientists at the Georgia Institute of Technology have created nanoscale logic circuits that can be used as the basis of nanoscale robotics and processors.

The method uses zinc oxide formed into a nanowire. Mechanical strain generates a voltage via the piezoelectric effect, which then modifies the current flowing in the wire (which is also a semiconductor). This effectively creates a tiny transistor that can be gated (opened or shut, with current either flowing or not) by the strain applied to the nanowire.

While slower than conventional (CMOS) logic circuits, the nanoscale logic circuits are effective for low-frequency (slower) applications, including nanorobotics, transducers, micromachines, human-computer interfacing, and microfluidics.


Every two days now we create as much information as we did from the dawn of civilization up until 2003, according to Google CEO Eric Schmidt — about five exabytes of data.

He cautioned that just because companies like his can do all sorts of things with this information, the more pressing question now is if they should. Schmidt noted that while technology is neutral, he doesn’t believe people are ready for what’s coming.

“I spend most of my time assuming the world is not ready for the technology revolution that will be happening to them soon,” Schmidt said. (I have talked about the promise and peril of improved search engines and massive amounts of info in essays like ‘Google and Red Queen’).


Michigan State University (MSU) researchers have developed “digital organisms” called Avidians that were made to evolve memory, and could eventually be used to generate intelligent artificial life and evolve into symmetrical, organized artificial brains that share structural properties with real brains.

MSU researcher Jeff Clune works with a system called HyperNEAT, which uses principles of developmental biology to grow a large number of digital neurons from a small number of instructions. He translated the artificial neurons into code that could control a Roomba robot.

You can build complex brains from a relatively small number of computerized instructions, or “genes,” he says. Their brains have millions of connections, yet still perform a task well, and that number could be pushed higher yet. “This is a sea change for the field. Being able to evolve functional brains at this scale allows us to begin pushing the capabilities of artificial neural networks up, and opens up a path to evolving artificial brains that rival their natural counterparts.” (I have talked about artificial evolution in several essays)
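Clune's point — that a small genome can specify a large brain — rests on indirect encoding: in HyperNEAT, an evolved pattern-producing network (a CPPN) maps the coordinates of two neurons to the weight of the connection between them, so the number of genes stays fixed while the number of connections grows. The sketch below substitutes a fixed three-parameter function for the evolved CPPN; the particular function and "gene" values are illustrative only:

```python
import math

def weight_fn(x1, y1, x2, y2, genes):
    """Tiny stand-in for an evolved CPPN: a few 'genes' parameterize a
    geometric pattern over neuron coordinates, yielding one weight."""
    a, b, c = genes
    return math.sin(a * (x1 - x2)) * math.cos(b * (y1 - y2)) + c

def grow_network(n_in, n_out, genes):
    """Generate a full n_in x n_out weight matrix from just three genes,
    by querying weight_fn at every pair of neuron positions."""
    def coord(i, n):                 # spread neurons evenly across [-1, 1]
        return -1.0 + 2.0 * i / max(n - 1, 1)
    return [[weight_fn(coord(i, n_in), -1.0, coord(j, n_out), 1.0, genes)
             for j in range(n_out)]
            for i in range(n_in)]
```

Three numbers here determine all fifty weights of a 10-by-5 layer, and scaling `n_in` and `n_out` up adds connections without adding genes — the trick that makes evolving million-connection brains tractable.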

This comes from a report: ‘A small change to the theory of gravity implies that our universe inherited its arrow of time from the black hole in which it was born. “Accordingly, our own Universe may be the interior of a black hole existing in another universe.” So concludes Nikodem Poplawski at Indiana University in a remarkable paper about the nature of space and the origin of time’. (I talked about universes being born in black holes in ‘a message from the creators’).

From NEW YORK TIMES: ‘Computer scientists are developing highly programmed machines that can engage people and teach them simple skills, including household tasks, vocabulary or, as in the case of the boy, playing, elementary imitation and taking turns.

The most advanced models are fully autonomous, guided by artificial intelligence software like motion tracking and speech recognition, which can make them just engaging enough to rival humans at some teaching tasks.

Researchers say the pace of innovation is such that these machines should begin to learn as they teach, becoming the sort of infinitely patient, highly informed instructors that would be effective in subjects like foreign language or in repetitive therapies used to treat developmental problems like autism’. (R+D such as this might be applied to our avatars, so they become more like people in their own right, increasingly capable at collaborating with humans in achieving goals).

ON DIGITAL PEOPLE: “Someday, those who lifelog will be able to create avatars that do an amazing job of impersonating them…Your lifelog will have all the details of how you sound, the phrases you employ, questions you have answered, and facts about your life. It will contain recordings of you under stress and relaxed, pleased and annoyed, in triumph and in defeat. It will know your favourite quips and mottos…Two-way immortality: the ability to actually interact with an avatar that responds just like you would” - Ian Bell.

“Like it or not, most of us already have a digital identity of one form or other. This may be just a photo or, like a Facebook profile, it might be packed with personal details. For many, our digital self has become a handy tool to communicate, network, blog, to buy and sell or to play games online – a useful extension to the real world. But as we leave increasing amounts of personal data behind on the net, the online world could give birth to a digital persona.

The idea would be to capture an individual’s personality by uploading personal material and memories and then fuse all this with information harvested online, creating a truly lifelike avatar…

…For now, avatar technology remains primitive. But the agricultural revolution saw a vast expansion of what human beings could accomplish together and the industrial revolution saw power shift from rural nobility to urban business. The digital identity revolution in its turn could transform how people think about themselves, their life, and their neighbours. The rise of the avatar could change our ideas about what it means to be human” - New Scientist.

“one of the biggest applications we see in the intelligent virtual agents context is the digital twin…an AI-powered avatar to act in virtual worlds on one’s behalf—embodying one’s ideas and preferences, and making a reasonable emulation of the decisions one would make…The best way to achieve this would be to scan the human brain and extract the relevant information—but potentially, one could give enough information about oneself to a digital ditto that it could effectively replicate one’s state of mind, via simply supplying it with text to read, answers to various questions, and video and audio of oneself going about life” - Ben Goertzel.

ON DIGITAL ASSISTANTS: A report in Technology Review on mobile phone apps by a company called Vlingo. “Called “SuperDialer,” the service can, for example, let a user say “Call pizza” and subsequently see a list of nearby pizza places drawn from both the user’s address book and the Web….A separate service in the works, Vlingo Answers, would respond if a user asked a specific question such as “How old is Kiefer Sutherland?” Vlingo would try to get the answers from standard Web search results and scans of specialty information sites such as Wolfram Alpha and True Knowledge”.

Future, more capable versions of this technology, if integrated with our avatars, could see them evolve from mere tools for communication into people who collaborate with us, something I talked about in ‘Google and the Red Queen’.

“Minority Report” style digital advertising billboards being trialled in Japan have cameras that read the gender and age group of people looking at them to tailor their commercial messages.

The technology uses face recognition software to glean the gender and age group of passers-by, but operators have promised they will save no recorded images, only the collated data about groups of people. (In ‘Google And Red Queen’ I talked about bioinformatics and ‘neuromarketing’ (essentially, understanding how the mind filters incoming info) being used to tailor knowledge and advertising to target groups and individuals).


2 Responses to THEY SAY I AM RIGHT.

  1. Seren Seraph says:

    I don’t agree with Bell. Given all that external data it is possible to create a simulacrum that sort of sounds/acts like me. But as it does not have one bit of my brain structure and connections it is incapable of any but very rough analogs to how I would react to new situations. It is not at all able to evolve over time. The engine/AI running it may but it is missing my own internal wiring to great extent.

    I am a manifestation of my primary in a different domain with different degrees of freedom and different choices of what is more or less strongly accented and developed. I would not be me but another if someone else brought up this avatar and ran the controls. It will be a great day when I can be more autonomous and more, if you will, “ensouled”, with my own mind stack. But that will take a lot more than good largely externalized recordings.

  2. ON DIGITAL ASSISTANTS (see posts like ‘the singularity and the web’ and ‘Google And Red Queen’)

    ‘SRI International is hoping to bring the concept of virtual personal assistants closer to reality.

    Recently, the institute has set its sights on the mobile phone and Web market, especially on creating applications that perform personal functions.

    SRI’s newest venture: a Web-based personalized news feed, Chattertrap, that monitors what people are reading to learn what they like, and then serves up articles and links that suit their interests.

    Another recent project is a mobile application, Siri, that allows people to perform Web searches by voice on a cellphone, and was acquired by Apple’- NEW YORK TIMES.
