Monday, February 28, 2011

Running on a Faster Track: Researchers Develop Scheduling Tool to Save Time on Public Transport

Dr. Tal Raviv and his graduate student Mor Kaspi of Tel Aviv University's Department of Industrial Engineering in the Iby and Aladar Fleischman Faculty of Engineering have developed a tool that makes passenger train journeys shorter, especially when transfers are involved -- a computer-based system to shave precious travel minutes off a passenger's journey.

Dr. Raviv's solution, the"Service Oriented Timetable," relies on computers and complicated algorithms to do the scheduling."Our solution is useful for any metropolitan region where passengers are transferring from one train to another, and where train service providers need to ensure that the highest number of travellers can make it from Point A to Point B as quickly as possible," says Dr. Raviv.

Saves time and resources

In the recent economic downturn, more people are seeking to scale back their monthly transportation costs. Public transportation is a win-win -- good for both the bank account and the environment. But when travel routes are complicated by transfers, it becomes a hard job to manage who can wait -- and who can't -- between trains.

Another factor is consumer preference. Ideally, each passenger would like a direct train to his destination, with no stops en route. But passengers with different itineraries must compete for the system's resources. Adding a stop at a certain station will improve service for passengers for whom the station is the final destination, but will cause a delay for passengers who are only passing through it. The question is how to devise a schedule which is fair for everyone. What are the decisions that will improve the overall condition of passengers in the train system?

It's not about adding more resources to the system, but more intelligently managing what's already there, Dr. Raviv explains.

More time on the train, less time on the platform

In their timetabling work, Dr. Raviv and Kaspi study existing schedules to find places in the train system that can be optimized so that passengers reach their final destinations faster.

Traditionally, train planners looked for solutions based on the frequency of trains passing through certain stops. Dr. Raviv and Kaspi, however, are developing a high-tech solution for scheduling trains that considers the total travel time of passengers, including their waiting time at transfer stations.
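
To make the optimization criterion concrete, here is a toy sketch in Python. It is not Raviv and Kaspi's model: the headway, ride times and passenger flows are invented, and the "algorithm" is brute-force enumeration. It only illustrates the objective they describe, choosing a timetable that minimizes average total travel time with the wait at the transfer included.

    # Toy illustration of passenger-oriented timetabling; not the authors'
    # actual model. Two lines meet at a transfer station, and we choose the
    # connecting line's departure offset to minimize average journey time.
    HEADWAY = 15          # line-B trains depart every 15 minutes (assumed)
    RIDE_A = 22           # minutes on line A to the transfer station (assumed)
    RIDE_B = 18           # minutes on line B after the transfer (assumed)

    # Times at which groups of passengers board line A (assumed):
    boardings = [0, 10, 20, 40]
    arrivals_at_transfer = [t + RIDE_A for t in boardings]

    def avg_travel_time(offset):
        """Average journey time if line B departs at offset, offset+15, ..."""
        total = 0
        for arrive in arrivals_at_transfer:
            wait = (offset - arrive) % HEADWAY   # wait for the next line-B train
            total += RIDE_A + wait + RIDE_B
        return total / len(arrivals_at_transfer)

    best = min(range(HEADWAY), key=avg_travel_time)
    print(f"Best offset: {best} min; average journey: {avg_travel_time(best):.1f} min")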

"Let's say you commute to Manhattan from New Jersey every day. We can find a way to synchronize trains to minimize the average travel time of passengers," says Dr. Raviv."That will make people working in New York a lot happier."

The project has already been simulated on the Israel Railway, reducing the average travel time per commuter from 60 to 48 minutes. The tool can be most useful in countries and cities, he notes, where train schedules are robust and very complicated.

The researchers won a competition of the Railway Applications Section of the Institute for Operations Research and the Management Sciences (INFORMS) last November for their computer program that optimizes a refuelling schedule for freight trains. Dr. Raviv also works on optimizing other forms of public transport, including the bike-sharing programs found in over 400 cities around the world today.


Source

Sunday, February 27, 2011

World's Smallest Magnetic Field Sensor: Researchers Explore Using Organic Molecules as Electronic Components

For the first time, a team of scientists from KIT and the Institut de Physique et Chimie des Matériaux de Strasbourg (IPCMS) has now succeeded in combining the concepts of spin electronics and molecular electronics in a component consisting of a single molecule. Components based on this principle have special potential, as they allow for the production of very small and highly efficient magnetic field sensors for read heads in hard disks or for non-volatile memories, further increasing reading speed and data density.

The use of organic molecules as electronic components is currently being investigated extensively. In conventional charge-based electronics, the information is encoded with the help of the charge of the electron (current on or off), which requires a relatively large amount of energy; miniaturization makes this a growing problem. In spin electronics, the information is instead encoded in the intrinsic angular momentum of the electron, the spin. The advantage is that the spin is maintained even when the power supply is switched off, which means that the component can store information without any energy consumption.

The German-French research team has now combined these concepts. The organic molecule H2-phthalocyanine, which is also used as a blue dye in ballpoint pens, exhibits a strong dependence of its electrical resistance on the relative magnetization of the electrodes when it is trapped between spin-polarized, i.e. magnetic, electrodes. This effect was first observed in purely metallic contacts by Albert Fert and Peter Grünberg. It is referred to as giant magnetoresistance and was honored with the Nobel Prize in Physics in 2007.
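
The strength of such an effect is conventionally quoted as a magnetoresistance ratio comparing the two electrode configurations. A minimal illustration with made-up resistance values (the experiment's actual numbers are not reproduced here):

    # Illustrative magnetoresistance ratio with made-up resistance values,
    # not the measured ones from the single-molecule experiment.
    R_parallel = 1.0e6       # junction resistance, electrode magnetizations parallel
    R_antiparallel = 1.6e6   # junction resistance, magnetizations antiparallel

    gmr = (R_antiparallel - R_parallel) / R_parallel
    print(f"magnetoresistance ratio: {gmr:.0%}")   # 60% for these example values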

The giant magnetoresistance effect on single molecules was demonstrated at KIT within the framework of a combined experimental and theoretical project of the CFN and a German-French graduate school, in cooperation with the IPCMS in Strasbourg. The scientists' results are now presented in the journal Nature Nanotechnology.

Karlsruhe Institute of Technology (KIT) is a public corporation and state institution of Baden-Wuerttemberg, Germany. It fulfills the mission of a university and the mission of a national research center of the Helmholtz Association. KIT focuses on a knowledge triangle that links the tasks of research, teaching, and innovation.


Source

Saturday, February 26, 2011

Atomic Antennae Transmit Quantum Information Across a Microchip

The researchers have published their work in the scientific journal Nature.

Six years ago scientists at the University of Innsbruck realized the first quantum byte -- a quantum computer with eight entangled quantum particles -- a record that still stands. "Nevertheless, to make practical use of a quantum computer that performs calculations, we need a lot more quantum bits," says Prof. Rainer Blatt, who, with his research team at the Institute for Experimental Physics, created the first quantum byte in an electromagnetic ion trap. "In these traps we cannot string together large numbers of ions and control them simultaneously."

To solve this problem, the scientists have started to design a quantum computer based on a system of many small registers, which have to be linked. To achieve this, Innsbruck quantum physicists have now developed a revolutionary approach based on a concept formulated by theoretical physicists Ignacio Cirac and Peter Zoller. In their experiment, the physicists electromagnetically coupled two groups of ions over a distance of about 50 micrometers. Here, the motion of the particles serves as an antenna. "The particles oscillate like electrons in the poles of a TV antenna and thereby generate an electromagnetic field," explains Blatt. "If one antenna is tuned to the other one, the receiving end picks up the signal of the sender, which results in coupling." The energy exchange taking place in this process could be the basis for fundamental computing operations of a quantum computer.

Antennae amplify transmission

"We implemented this new concept in a very simple way," explains Rainer Blatt. In a miniaturized ion trap a double-well potential was created, trapping the calcium ions. The two wells were separated by 54 micrometers."By applying a voltage to the electrodes of the ion trap, we were able to match the oscillation frequencies of the ions," says Blatt.

"This resulted in a coupling process and an energy exchange, which can be used to transmit quantum information." A direct coupling of two mechanical oscillations at the quantum level has never been demonstrated before. In addition, the scientists show that the coupling is amplified by using more ions in each well."These additional ions function as antennae and increase the distance and speed of the transmission," says Rainer Blatt, who is excited about the new concept. This work constitutes a promising approach for building a fully functioning quantum computer.

"The new technology offers the possibility to distribute entanglement. At the same time, we are able to target each memory cell individually," explains Rainer Blatt. The new quantum computer could be based on a chip with many micro traps, where ions communicate with each other through electromagnetic coupling. This new approach represents an important step towards practical quantum technologies for information processing.

The quantum researchers are supported by the Austrian Science Fund FWF, the European Union, the European Research Council and the Federation of Austrian Industries Tyrol.


Source

Friday, February 25, 2011

A Semantic Sommelier: Wine Application Highlights the Power of Web 3.0

Web scientist and Rensselaer Polytechnic Institute Tetherless World Research Constellation Professor Deborah McGuinness has been developing a family of applications for the most tech-savvy wine connoisseurs since her days as a graduate student in the 1980s -- before what we now know as the World Wide Web had even been envisioned.

Today, McGuinness is among the world's foremost experts in Web ontology languages, which are used to encode meaning in a form that computers can understand. The most recent version of her wine application serves as an exceptional example of what the future of the World Wide Web, often called Web 3.0, might in fact look like. It is also an exceptional tool for teaching future Web scientists about ontologies.

"The wine agent came about because I had to demonstrate the new technology that I was developing," McGuinness said."I had sophisticated applications that used cutting-edge artificial intelligence technology in domains, such as telecommunications equipment, that were difficult for anyone other than well-trained engineers to understand." McGuinness took the technology into the domain of wines and foods to create a program that she uses as a semantic tutorial, an"Ontologies 101" as she calls it. And students throughout the years have done many things with the wine agent including, most recently, experimentation with social media and mobile phone applications.

Today, the semantic sommelier is set to provide even the most novice of foodies some exciting new tools to expand their wine knowledge and food-pairing abilities on everything from their home PC to their smart phone. Evan Patton, a graduate student in computer science at Rensselaer, is the most recent student to tinker with the wine agent and is working with McGuinness to bring it into the mobile space on both the iPhone and Droid platforms.

The agent uses the Web Ontology Language (OWL), the formal language for the Semantic Web. Like the English language, which uses an agreed-upon alphabet to form words and sentences that all English-speaking people can recognize, OWL uses a formalized set of symbols to create a code or language that a wide variety of applications can "read." This allows your computer to operate more efficiently and more intelligently with your cell phone or your Facebook page, or any other webpage or web-enabled device. These semantics also allow for an entirely new generation of smart search technologies.

Thanks to its semantic technology, the sommelier is loaded with basic background knowledge about wine and food. For wine, that includes its body, color (red versus white or blush), sweetness, and flavor. For food, this includes the course (e.g. appetizer versus entrée), ingredient type (e.g. fish versus meat), and its heat (mild versus spicy). The semantic technologies beneath the application then encode that knowledge and apply reasoning to search and share that information. This semantic functionality can now be exploited for a variety of culinary purposes, all of which McGuinness, herself a lover of fine wines, and Patton are working on together.
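
The real agent encodes this knowledge in OWL and hands it to a reasoner, but the flavor of the pairing rules can be sketched with ordinary data structures. The wines and rules below are hypothetical stand-ins, not McGuinness's actual ontology:

    # Simplified stand-in for the wine agent's pairing logic (hypothetical
    # rules and wines, not the actual OWL ontology or its reasoner).
    WINES = [
        {"name": "Riesling",           "color": "white", "body": "light", "sweetness": "off-dry"},
        {"name": "Cabernet Sauvignon", "color": "red",   "body": "full",  "sweetness": "dry"},
        {"name": "Rosé",               "color": "blush", "body": "light", "sweetness": "dry"},
    ]

    def pair(course, ingredient, heat):
        """Apply coarse pairing rules over the wine knowledge base."""
        if ingredient == "fish":
            candidates = [w for w in WINES if w["color"] != "red"]
        else:
            candidates = [w for w in WINES if w["color"] == "red"]
        if heat == "spicy":   # a touch of sweetness tempers heat
            sweet = [w for w in candidates if w["sweetness"] == "off-dry"]
            candidates = sweet or candidates
        if course == "appetizer":
            light = [w for w in candidates if w["body"] == "light"]
            candidates = light or candidates
        return [w["name"] for w in candidates]

    print(pair("entrée", "fish", "spicy"))   # -> ['Riesling']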

Having a spicy fish dish for dinner? Search within the system and it will arrive at a good wine pairing for the meal. Beyond basic pairings, the application has strong possibilities for use in individual restaurants, according to McGuinness, who envisions teaming up with restaurant owners to input their specific menus and wine lists. Thus, diners could check menus and wine holdings before going out for dinner, or they could enter a restaurant, pull out their smart phone, and instantly know what is in the wine cellar and what goes best with that chef's clams casino. Beyond pairings, diners could rate different wines, providing fellow diners with personal reviews and the restaurateur with valuable information on what to stock up on next week. Is it a dry restaurant? The application could also be loaded up with the inventory of the liquor store down the street.

Beyond the table, the application can also be used to make personal wine suggestions and virtual wine cellars that you could share with your friends via Facebook or other social media platforms. It could also be used to manage a personal wine cellar, providing information on what is a peak flavor at the moment or what in your cellar would go best with your famous steak au poivre.

"Today we have 10 gadgets with us at any given time," McGuinness said."We live and breathe social media. With semantic technologies, we can offload more of the searching and reasoning required to locate and share information to the computer while still maintaining personal control over our information and how we use it. We also increase the ability of our technologies to interact with each other and decrease the need for as many gadgets or as many interactions with them since the applications do more work for us."


Source

Thursday, February 24, 2011

Quantum Simulator Becomes Accessible to the World

The researchers have published their work in the scientific journal Nature.

Many phenomena in our world are based on the nature of quantum physics: the structure of atoms and molecules, chemical reactions, material properties, magnetism and possibly also certain biological processes. Since the complexity of phenomena increases exponentially with the number of quantum particles involved, a detailed study of these complex systems quickly reaches its limits, and conventional computers fail when calculating these problems. To overcome these difficulties, physicists have been developing quantum simulators on various platforms, such as neutral atoms, ions or solid-state systems, which, similar to quantum computers, utilize the particular nature of quantum physics to control this complexity.

In another breakthrough in this field, a team of young scientists in the research groups of Rainer Blatt and Peter Zoller at the Institute for Experimental Physics and Theoretical Physics of the University of Innsbruck and the Institute of Quantum Optics and Quantum Information (IQOQI) of the Austrian Academy of Sciences has been the first to engineer a comprehensive toolbox for an open-system quantum computer, which will enable researchers to construct more sophisticated quantum simulators for investigating complex problems in quantum physics.

Using controlled dissipation

In their experiments the physicists make use of a natural phenomenon that they usually try to minimize as much as possible: environmental disturbances. Such disturbances usually cause information loss in quantum systems and destroy fragile quantum effects such as entanglement or interference. In physics this deleterious process is called dissipation. The Innsbruck researchers, led by experimental physicists Julio Barreiro and Philipp Schindler as well as the theorist Markus Müller, have now been the first to use dissipation in a beneficial way in a quantum simulator with trapped ions, engineering the system-environment coupling experimentally.

"We not only control all internal states of the quantum system consisting of up to four ions but also the coupling to the environment," explains Julio Barreiro."In our experiment we use an additional ion that interacts with the quantum system and, at the same time, establishes a controlled contact to the environment," explains Philipp Schindler. The surprising result is that by using dissipation, the researchers are able to generate and intensify quantum effects, such as entanglement, in the system."We achieved this by controlling the disruptive environment," says an excited Markus Müller.

Putting the quantum world into order

In one of their experiments the researchers demonstrate the control of dissipative dynamics by entangling four ions using the environment ion. "Contrary to other common procedures, this also works irrespective of the initial state of each particle," explains Müller. "Through a collective cooling process, the particles are driven to a common state." This procedure can be used to prepare many-body states, which otherwise could only be created and observed in an extremely well isolated quantum system.

The beneficial use of an environment allows for the realization of new types of quantum dynamics and the investigation of systems that have scarcely been accessible to experiments until now. In the last few years there has been continuous thinking about how dissipation, instead of being suppressed, could be actively used as a resource for building quantum computers and quantum memories. The Innsbruck theoretical and experimental physicists cooperated closely, and they have now been the first to successfully implement these dissipative effects in a quantum simulator.

The Innsbruck researchers are supported by the Austrian Science Fund (FWF), the European Commission and the Federation of Austrian Industries Tyrol.


Source

Wednesday, February 23, 2011

Toward Computers That Fit on a Pen Tip: New Technologies Usher in the Millimeter-Scale Computing Era

One of the two developments is a complete millimeter-scale computing system, a pressure monitor designed to be implanted in the eye. The other is a compact radio that needs no tuning to find the right frequency, which could be a key enabler in organizing millimeter-scale systems into wireless sensor networks. These networks could one day track pollution, monitor structural integrity, perform surveillance, or make virtually any object smart and trackable.

Both developments at the University of Michigan are significant milestones in the march toward millimeter-scale computing, believed to be the next electronics frontier.

Researchers are presenting papers on each at the International Solid-State Circuits Conference (ISSCC) in San Francisco. The work is being led by three faculty members in the U-M Department of Electrical Engineering and Computer Science: professors Dennis Sylvester and David Blaauw, and assistant professor David Wentzloff.

Bell's Law and the promise of pervasive computing

Nearly invisible millimeter-scale systems could enable ubiquitous computing, and the researchers say that's the future of the industry. They point to Bell's Law, a corollary to Moore's Law. (Moore's says that the number of transistors on an integrated circuit doubles every two years, roughly doubling processing power.)

Bell's Law says there's a new class of smaller, cheaper computers about every decade. With each new class, the volume shrinks by two orders of magnitude and the number of systems per person increases. The law has held from the mainframes of the 1960s through the personal computers of the '80s, the notebooks of the '90s, and the smartphones of the new millennium.
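
The quantitative claim, two orders of magnitude smaller per decade, is easy to tabulate. A rough illustration (the class list and starting volume are order-of-magnitude assumptions, but the extrapolation lands, on schedule, at the cubic millimeter):

    # Order-of-magnitude illustration of Bell's Law: each computer class is
    # roughly 100x smaller in volume than the last. Volumes are assumptions.
    classes = ["mainframe (1960s)", "minicomputer (1970s)", "personal computer (1980s)",
               "notebook (1990s)", "smartphone (2000s)", "mm-scale system (2010s)"]
    volume_m3 = 10.0   # ballpark volume of a 1960s mainframe installation
    for c in classes:
        print(f"{c:28s} ~{volume_m3:.0e} m^3")
        volume_m3 /= 100.0   # two orders of magnitude per decade
    # The last line comes out at ~1e-9 m^3, i.e. one cubic millimeter.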

"When you get smaller than hand-held devices, you turn to these monitoring devices," Blaauw said."The next big challenge is to achieve millimeter-scale systems, which have a host of new applications for monitoring our bodies, our environment and our buildings. Because they're so small, you could manufacture hundreds of thousands on one wafer. There could be 10s to 100s of them per person and it's this per capita increase that fuels the semiconductor industry's growth."

The first complete millimeter-scale system

Blaauw and Sylvester's new system is targeted toward medical applications. The work they present at ISSCC focuses on a pressure monitor designed to be implanted in the eye to conveniently and continuously track the progress of glaucoma, a potentially blinding disease. (The device is expected to be commercially available several years from now.)

In a package that's just over 1 cubic millimeter, the system fits an ultra low-power microprocessor, a pressure sensor, memory, a thin-film battery, a solar cell and a wireless radio with an antenna that can transmit data to an external reader device that would be held near the eye.

"This is the first true millimeter-scale complete computing system," Sylvester said.

"Our work is unique in the sense that we're thinking about complete systems in which all the components are low-power and fit on the chip. We can collect data, store it and transmit it. The applications for systems of this size are endless."

The processor in the eye pressure monitor is the third generation of the researchers' Phoenix chip, which uses a unique power gating architecture and an extreme sleep mode to achieve ultra-low power consumption. The newest system wakes every 15 minutes to take measurements and consumes an average of 5.3 nanowatts. To keep the battery charged, it requires exposure to 10 hours of indoor light each day or 1.5 hours of sunlight. It can store up to a week's worth of information.
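
Those figures make the energy budget easy to check. A back-of-envelope sketch using only the numbers quoted above, ignoring conversion losses in the solar cell and battery:

    # Back-of-envelope check of the monitor's power budget, using figures
    # quoted in the article: 5.3 nW average draw, with 10 hours of indoor
    # light or 1.5 hours of sunlight per day to keep the battery charged.
    P_AVG = 5.3e-9                       # average power draw, watts
    daily_energy = P_AVG * 24 * 3600     # joules the system uses per day
    print(f"energy per day: {daily_energy * 1e6:.0f} microjoules")

    for label, hours in [("indoor light", 10), ("sunlight", 1.5)]:
        harvest_power = daily_energy / (hours * 3600)   # net harvesting rate needed
        print(f"solar cell must net ~{harvest_power * 1e9:.0f} nW in {label}")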

While this system is minuscule and complete, its radio doesn't equip it to talk to other devices like it. That's an important feature for any system targeted toward wireless sensor networks.

A unique compact radio to enable wireless sensor networks

Wentzloff and doctoral student Kuo-Ken Huang have taken a step toward enabling such node-to-node communication. They've developed a consolidated radio with an on-chip antenna that doesn't need the bulky external crystal that engineers rely on today when two isolated devices need to talk to each other. The crystal reference keeps time and selects a radio frequency band. Integrating the antenna and eliminating this crystal significantly shrinks the radio system. Wentzloff's is less than 1 cubic millimeter in size.

Wentzloff and Huang's key innovation is to engineer the new antenna to keep time on its own and serve as its own reference. By integrating the antenna through an advanced CMOS process, they can precisely control its shape and size, and therefore how it oscillates in response to electrical signals.

"Antennas have a natural resonant frequency for electrical signals that is defined by their geometry, much like a pure audio tone on a tuning fork," Wentzloff said."By designing a circuit to monitor the signal on the antenna and measure how close it is to the antenna's natural resonance, we can lock the transmitted signal to the antenna's resonant frequency."

"This is the first integrated antenna that also serves as its own reference. The radio on our chip doesn't need external tuning. Once you deploy a network of these, they'll automatically align at the same frequency."

The researchers are now working on lowering the radio's power consumption so that it's compatible with millimeter-scale batteries.

Greg Chen, a doctoral student in the Department of Electrical Engineering and Computer Science, presents "A Cubic-Millimeter Energy-Autonomous Wireless Intraocular Pressure Monitor." The researchers are collaborating with Ken Wise, the William Gould Dow Distinguished University Professor of Electrical Engineering and Computer Science, on the packaging of the sensor, and with Paul Lichter, chair of the Department of Ophthalmology and Visual Sciences at the U-M Medical School, on the implantation studies. Huang presents "A 60GHz Antenna-Referenced Frequency-Locked Loop in 0.13μm CMOS for Wireless Sensor Networks." This research is funded by the National Science Foundation. The university is pursuing patent protection for the intellectual property and is seeking commercialization partners to help bring the technology to market.


Source

Tuesday, February 22, 2011

'Fingerprints' Match Molecular Simulations With Reality

ORNL's Jeremy Smith collaborated on devising a method -- dynamical fingerprints -- that reconciles the different signals between experiments and computer simulations to strengthen analyses of molecules in motion. The research will be published in the Proceedings of the National Academy of Sciences.

"Experiments tend to produce relatively simple and smooth-looking signals, as they only 'see' a molecule's motions at low resolution," said Smith, who directs ORNL's Center for Molecular Biophysics and holds a Governor's Chair at the University of Tennessee."In contrast, data from a supercomputer simulation are complex and difficult to analyze, as the atoms move around in the simulation in a multitude of jumps, wiggles and jiggles. How to reconcile these different views of the same phenomenon has been a long-standing problem."

The new method solves the problem by calculating peaks within the simulated and experimental data, creating distinct "dynamical fingerprints." The technique, conceived by Smith's former graduate student Frank Noe, now at the Free University of Berlin, can then link the two datasets.
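
The idea of reducing both views of a molecule to comparable sets of spectral peaks can be caricatured in a few lines. The sketch below uses synthetic data and a bare FFT, not the published method; the same peak extraction applied to a measured spectrum would give the experimental fingerprint to match against:

    import numpy as np

    # Caricature of a "dynamical fingerprint": reduce a noisy simulated
    # trajectory to the positions of its strongest spectral peaks.
    rng = np.random.default_rng(0)
    dt, n = 0.001, 20000
    t = np.arange(n) * dt

    # Simulated observable: two characteristic motions buried in noise.
    traj = (np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
            + 0.8 * rng.standard_normal(n))

    power = np.abs(np.fft.rfft(traj - traj.mean())) ** 2
    freqs = np.fft.rfftfreq(n, dt)

    def fingerprint(power, freqs, keep=2):
        """Frequencies of the strongest spectral peaks, sorted."""
        strongest = np.argsort(power)[::-1][:keep]
        return sorted(round(f, 2) for f in freqs[strongest])

    print("simulation fingerprint (Hz):", fingerprint(power, freqs))  # [5.0, 40.0]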

Supercomputer simulations and modeling capabilities can add a layer of complexity missing from many types of molecular experiments.

"When we started the research, we had hoped to find a way to use computer simulation to tell us which molecular motions the experiment actually sees," Smith said."When we were finished we got much more -- a method that could also tell us which other experiments should be done to see all the other motions present in the simulation. This method should allow major facilities like the ORNL's Spallation Neutron Source to be used more efficiently."

Combining the power of simulations and experiments will help researchers tackle scientific challenges in areas like biofuels, drug development, materials design and fundamental biological processes, which require a thorough understanding of how molecules move and interact.

"Many important things in science depend on atoms and molecules moving," Smith said."We want to create movies of molecules in motion and check experimentally if these motions are actually happening."

"The aim is to seamlessly integrate supercomputing with the Spallation Neutron Source so as to make full use of the major facilities we have here at ORNL for bioenergy and materials science development," Smith said.

The collaborative work included researchers from L'Aquila, Italy, Wuerzburg and Bielefeld, Germany, and the University of California at Berkeley. The research was funded in part by a Scientific Discovery through Advanced Computing grant from the DOE Office of Science.


Source

Monday, February 21, 2011

Brain-Machine Interfaces Make Gains by Learning About Their Users, Letting Them Rest, and Allowing for Multitasking

In a typical brain-computer interface (BCI) set-up, users can send one of three commands -- left, right, or no-command. No-command is the static state between left and right and is necessary for a brain-powered wheelchair to continue going straight, for example, or to stay put in front of a specific target. But it turns out that no-command is very taxing to maintain and requires extreme concentration. After about an hour, most users are spent. Not much help if you need to maneuver that wheelchair through an airport.

In an ongoing study demonstrated by Millán and doctoral student Michele Tavella at the AAAS 2011 Annual Meeting in Washington, D.C., the scientists hook volunteers up to a BCI and ask them to read, speak, or read aloud while delivering as many left and right commands as possible or delivering a no-command. By using statistical analysis programmed by the scientists, Millán's BCI can distinguish between left and right commands and learn when each subject is sending one of these versus a no-command. In other words, the machine learns to read the subject's mental intention. The result is that users can mentally relax and also execute secondary tasks while controlling the BCI.

The so-called Shared Control approach to facilitating human-robot interactions employs image sensors and image processing to avoid obstacles. According to Millán, however, Shared Control isn't enough to let an operator rest or concentrate on more than one command at once, which limits long-term use.

Millán's new work complements research on Shared Control and makes multitasking a reality while at the same time allowing users to catch a break. His trick is in decoding the signals coming from EEG readings on the scalp -- readings that represent the activity of millions of neurons and have notoriously low resolution. By incorporating statistical analysis, or probability theory, his BCI allows for both targeted control -- maneuvering around an obstacle -- and more precise tasks, such as staying on a target. It also makes it easier to give simple commands like "go straight" that need to be executed over longer periods of time (think back to that airport) without having to focus on giving the same command over and over again.
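
One way to picture the role of probability theory here: model the feature distributions of the intentional commands, and let anything that fits neither class confidently default to no-command, so the idle state costs the user no effort. A toy decoder along those lines, with synthetic one-dimensional features rather than Millán's actual classifier:

    import math

    # Toy probabilistic BCI decoder: Gaussian models for "left" and "right"
    # EEG features; low-confidence samples default to "no-command" so the
    # user need not actively maintain the idle state. Synthetic values only.
    MEANS = {"left": -1.0, "right": 1.0}   # assumed class means of a 1-D feature
    STD = 0.6                              # assumed within-class spread

    def decode(x, confidence=0.8):
        """Return the most probable intentional command, else 'no-command'."""
        likelihood = {c: math.exp(-0.5 * ((x - m) / STD) ** 2)
                      for c, m in MEANS.items()}
        best = max(likelihood, key=likelihood.get)
        posterior = likelihood[best] / sum(likelihood.values())
        return best if posterior > confidence else "no-command"

    print([decode(x) for x in (-1.1, 0.05, 0.9, -0.2)])
    # -> ['left', 'no-command', 'right', 'no-command']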

It will be a while before this cutting-edge technology makes the move from lab to production line, but Millán's prototypes are the first working models of their kind to use probability theory to make BCIs easier to use over time. His next step is to combine this new level of sophistication with Shared Control in an ongoing effort to take BCI to the next level, necessary for widespread use. Further advancements, such as finer-grained interpretation of cognitive information, are being developed in collaboration with the European project Tools for Brain-Computer Interaction (www.tobi.com). The multinational project is headed by Professor Millán and has moved into the clinical testing phase for several BCIs.


Source

Sunday, February 20, 2011

Scientists Steer Car With the Power of Thought

The researchers then succeeded in developing an interface to connect the sensors to their otherwise purely computer-controlled vehicle, so that it can now be "controlled" via thoughts. Driving by thought control was tested on the site of the former Tempelhof Airport.

The scientists from Freie Universität first used the sensors for measuring brain waves in such a way that a person could move a virtual cube in different directions with the power of his or her thoughts. The test subject thinks of four situations that are associated with driving, for example, "turn left" or "accelerate." In this way the person trained the computer to interpret bioelectrical wave patterns emitted from his or her brain and to link them to commands that could later be used to control the car. The computer scientists then connected the measuring device to the steering, accelerator, and brakes of a computer-controlled vehicle, which made it possible for the subject to influence the movement of the car just using his or her thoughts.
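
Downstream of the trained classifier there must be a layer that maps decoded commands onto the actuators, as the article describes for steering, accelerator and brakes. A hypothetical dispatch sketch (the command set and limits are assumptions, not the AutoNOMOS interface):

    # Hypothetical mapping from decoded mental commands to actuation values
    # (steering angle in degrees, throttle/brake in [0, 1]); not the actual
    # AutoNOMOS code. The four-command set is assumed for illustration.
    def actuate(command, state):
        if command == "turn left":
            state["steering"] = max(state["steering"] - 5, -30)
        elif command == "turn right":
            state["steering"] = min(state["steering"] + 5, 30)
        elif command == "accelerate":
            state["throttle"], state["brake"] = min(state["throttle"] + 0.1, 1.0), 0.0
        elif command == "brake":
            state["throttle"], state["brake"] = 0.0, 0.5
        return state

    state = {"steering": 0, "throttle": 0.0, "brake": 0.0}
    for cmd in ["accelerate", "accelerate", "turn left", "brake"]:
        state = actuate(cmd, state)
    print(state)   # {'steering': -5, 'throttle': 0.0, 'brake': 0.5}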

"In our test runs, a driver equipped with EEG sensors was able to control the car with no problem -- there was only a slight delay between the envisaged commands and the response of the car," said Prof. Raúl Rojas, who heads the AutoNOMOS project at Freie Universität Berlin. In a second test version, the car drove largely automatically, but via the EEG sensors the driver was able to determine the direction at intersections.

The AutoNOMOS Project at Freie Universität Berlin is studying the technology for the autonomous vehicles of the future. With the EEG experiments, the researchers are investigating hybrid control approaches, i.e., those in which people work together with machines.

The computer scientists have made a short film about their research, which is available at: http://tinyurl.com/BrainDriver


Source

Saturday, February 19, 2011

Augmented Reality System for Learning Chess

An ordinary webcam, a chess board, a set of 32 pieces and custom software are the key elements in the final degree project of the telecommunications engineering students Ivan Paquico and Cristina Palmero, from the UPC-Barcelona Tech's Terrassa School of Engineering (EET). The project, for which the students were awarded a distinction, was directed by the professor Jordi Voltas and completed during an international mobility placement in Finland.

The system created by Ivan Paquico, the 2001 Spanish Internet chess champion, and Cristina Palmero, a keen player and federation member, is a didactic tool that will help chess clubs and associations to teach the game and make it more appealing, particularly to younger players.

The system combines augmented reality, computer vision and artificial intelligence, and the only equipment required is a high-definition home webcam, the Augmented Reality Chess software, a standard board and pieces, and a set of cardboard markers the same size as the squares on the board, each marked with the first letter of the corresponding piece: R for the king (rei in Catalan), D for the queen (dama), T for the rooks (torres), A for the bishops (alfils), C for the knights (cavalls) and P for the pawns (peons).

Learning chess with virtual pieces

To use the system, learners play with an ordinary chess board but move the cardboard markers instead of standard pieces. The table is lit from above and the webcam focuses on the board, and every time the player moves one of the markers the system recognises the piece and reproduces the move in 3D on the computer screen, creating a virtual representation of the game.

For example, if the learner moves the marker P (pawn), the corresponding piece will be displayed on the screen in 3D, with all of the possible moves indicated. This is a simple and attractive way of showing novices the permitted movements of each piece, making the system particularly suitable for children learning the basics of this board game.
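
Once the vision layer has identified which marker sits on which square, showing the permitted moves is ordinary chess logic. A tiny sketch of that final step for a white pawn, leaving out the students' computer-vision and 3D-rendering layers entirely:

    # Once the webcam has located a marker on a square, listing its legal
    # moves is plain chess logic. Sketch for a white pawn's forward moves
    # only (captures omitted); not the students' actual software.
    FILES = "abcdefgh"

    def pawn_moves(square, occupied=()):
        """Forward moves for a white pawn, given occupied squares."""
        rank = int(square[1])
        moves = []
        if rank < 8 and f"{square[0]}{rank + 1}" not in occupied:
            moves.append(f"{square[0]}{rank + 1}")
            if rank == 2 and f"{square[0]}{rank + 2}" not in occupied:
                moves.append(f"{square[0]}{rank + 2}")   # double step from start rank
        return moves

    print(pawn_moves("e2"))            # ['e3', 'e4']
    print(pawn_moves("e2", {"e3"}))    # [] -- blocked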

Making chess accessible to all

The learning tool also incorporates a move-tracking program called Chess Recognition: from the images captured by the webcam, the system instantly recognises and analyses every movement of every piece and can act as a referee, identify illegal moves and provide the players with an audible description of the game status. According to Ivan Paquico and Cristina Palmero, this feature could be very useful for players with visual impairment -- who have their own federation and, until now, have had to play with specially adapted boards and pieces -- and for clubs and federations, tournament organisers and enthusiasts of all levels.

The Chess Recognition program saves whole games so that they can be shared, broadcast online and viewed on demand, and can generate a complete user history for analysing the evolution of a player's game. The program also creates an automatic copy of the scoresheet (the official record of each game) for players to view or print.

The technology for playing chess and recording games online has been available for a number of years, but until now players needed sophisticated equipment including pieces with integrated chips and a special electronic board with a USB connection. The standard retail cost of this equipment is between 400 and 500 euros.


Source

Friday, February 18, 2011

Controlling a Computer With Thoughts?

The projects build upon ongoing research conducted in epilepsy patients who had the interfaces temporarily placed on their brains and were able to move cursors and play computer games, as well as in monkeys that, through interfaces, guided a robotic arm to feed themselves marshmallows and turn a doorknob.

"We are now ready to begin testing BCI technology in the patients who might benefit from it the most, namely those who have lost the ability to move their upper limbs due to a spinal cord injury," said Michael L. Boninger, M.D., director, UPMC Rehabilitation Institute, chair, Department of Physical Medicine and Rehabilitation, Pitt School of Medicine, and a senior scientist on both projects."It's particularly exciting for us to be able to test two types of interfaces within the brain."

"By expanding our research from the laboratory to clinical settings, we hope to gain a better understanding of how to train and motivate patients who will benefit from BCI technology," said Elizabeth Tyler-Kabara, M.D., Ph.D., a UPMC neurosurgeon and assistant professor of neurological surgery and bioengineering, Pitt Schools of Medicine and Engineering, and the lead surgeon on both projects.

In one project, funded by an $800,000 grant from the National Institutes of Health, a BCI based on electrocorticography (ECoG) will be placed on the motor cortex surface of a spinal cord injury patient's brain for up to 29 days. The neural activity picked up by the BCI will be translated through a computer processor, allowing the patient to learn to control computer cursors, virtual hands, computer games and assistive devices such as a prosthetic hand or a wheelchair.

The second project, funded by the Defense Advanced Research Projects Agency (DARPA) for up to $6 million over three years, is part of a program led by the Johns Hopkins University Applied Physics Laboratory (APL), Laurel, Md. It will further develop technology tested in monkeys by Andrew Schwartz, Ph.D., professor of neurobiology, Pitt School of Medicine, and also a senior investigator on both projects.

It uses an interface consisting of a tiny 10-by-10 array of electrodes implanted on the surface of the brain to read activity from individual neurons. Those signals will be processed and relayed to maneuver a sophisticated prosthetic arm.

"Our animal studies have shown that we can interpret the messages the brain sends to make a simple robotic arm reach for an object and turn a mechanical wrist," Dr. Schwartz said."The next step is to see not only if we can make these techniques work for people, but also if we can make the movements more complex."

In the study, which is expected to begin by late 2011, participants will get two separate electrodes. In future research efforts, the technology may be enhanced with an innovative telemetry system that would allow wireless control of a prosthetic arm, as well as a sensory component.

"Our ultimate aim is to develop technologies that can give patients with physical disabilities control of assistive devices that will help restore their independence," Dr. Boninger said.


Source

Thursday, February 17, 2011

'Periodic Table of Shapes' to Give a New Dimension to Math

The three-year project should provide a resource that mathematicians, physicists and other scientists can use for calculations and research in a range of areas, including computer vision, number theory, and theoretical physics.

The researchers, from Imperial College London and institutions in Australia, Japan and Russia, are aiming to identify all the shapes across three, four and five dimensions that cannot be divided into other shapes.

As these building block shapes are revealed, the mathematicians will work out the equations that describe each shape and through this, they expect to develop a better understanding of the shapes' geometric properties and how different shapes are related to one another.

The work is funded by the Engineering and Physical Sciences Research Council, the Leverhulme Trust, the Royal Society and the European Research Council.

Project leader Professor Alessio Corti, from the Department of Mathematics at Imperial College London, explained: "The periodic table is one of the most important tools in chemistry. It lists the atoms from which everything else is made, and explains their chemical properties. Our work aims to do the same thing for three, four and five-dimensional shapes -- to create a directory that lists all the geometric building blocks and breaks down each one's properties using relatively simple equations. We think we may find vast numbers of these shapes, so you probably won't be able to stick our table on your wall, but we expect it to be a very useful tool."

The scientists will be analysing shapes that involve dimensions that cannot be 'seen' in a conventional sense in the physical world. In addition to the three dimensions of length, width and depth found in a three-dimensional shape, the scientists will explore shapes that involve other dimensions. For example, the space-time described by Einstein's Theory of Relativity has four dimensions -- the three spatial dimensions, plus time. String theorists believe that the universe is made up of many additional hidden dimensions that cannot be seen.

Professor Corti's colleague on the project, Dr Tom Coates, has created a computer modelling programme that should enable the researchers to pinpoint the basic building blocks for these multi-dimensional shapes from a pool of hundreds of millions of shapes. The researchers will be using this programme to identify shapes that can be defined by algebraic equations and that cannot be divided any further. They do not yet know how many such shapes there might be. The researchers calculate that there are around 500 million shapes that can be defined algebraically in four dimensions and they anticipate that they will find a few thousand building blocks from which all these shapes are made.

Dr Coates, from the Department of Mathematics at Imperial College London, added: "Most people are familiar with the idea of three-dimensional shapes, but for those who don't work in our field, it might be hard to get your head around the idea of shapes in four and five dimensions. However, understanding these kinds of shapes is really important for lots of aspects of science. If you are working in robotics, you might need to work out the equation for a five-dimensional shape in order to figure out how to instruct a robot to look at an object and then move its arm to pick that object up. If you are a physicist, you might need to analyse the shapes of hidden dimensions in the universe in order to understand how sub-atomic particles work. We think the work that we're doing in our new project will ultimately help our colleagues in many different branches of science.

"In our project we are looking for the basic building blocks of shapes. You can think of these basic building blocks as 'atoms', and think of larger shapes as 'molecules.' The next challenge is to understand how properties of the larger shapes depend on the 'atoms' that they are made from. In other words, we want to build a theory of chemistry for shapes," added Dr Coates.

Dr Coates has recently won a prestigious Philip Leverhulme Prize worth £70,000 from the Leverhulme Trust, providing some of the funding for this project. Philip Leverhulme prizes are awarded to outstanding scholars under the age of 36 who have "made a substantial contribution to their particular field of study, recognised at an international level, and where the expectation is that their greatest achievement is yet to come."

To follow the research project in real time, visit the researchers' blog at http://coates.ma.ic.ac.uk/fanosearch/ or follow the team on Twitter at http://twitter.com/fanosearch.


Source

Wednesday, February 16, 2011

US Secret Service Moves Tiny Town to Virtual Tiny Town: Teaching Secret Service Agents and Officers How to Prepare a Site Security Plan

Now, with help from the Department of Homeland Security (DHS) Science & Technology Directorate (S&T), the Secret Service is giving training scenarios a high-tech edge: moving from static tabletop models to virtual kiosks with gaming technology and 3D modeling.

For the past 40 years, a miniature model environment called "Tiny Town" has been one of the methods used to teach Secret Service agents and officers how to prepare a site security plan. The model includes different sites -- an airport, outdoor stadium, urban rally site and a hotel interior -- and uses scaled models of buildings, cars and security assets. The scenario-based training allows students to illustrate a dignitary's entire itinerary and accommodate unrelated, concurrent activities in a public venue. Various elements of a visit are covered, such as an arrival, rope line or public remarks. The class works as a whole and in small groups to develop and present their security plan.

Enter videogame technology. The Secret Service's James J. Rowley Training Center near Washington, D.C., sought to take these scenarios beyond a static environment to encompass the dynamic threat spectrum that exists today, while taking full advantage of the latest computer software technology.

The agency's Security and Incident Modeling Lab wanted to update Tiny Town and create a more relevant and flexible training tool. With funding from DHS S&T, the Secret Service developed the Site Security Planning Tool (SSPT), a new training system dubbed "Virtual Tiny Town" by instructors, with high-tech features:

  • 3D models and game-based virtual environments
  • Simulated chemical plume dispersion for making and assessing decisions
  • A touch interface to foster collaborative, interactive involvement by student teams
  • A means to devise, configure, and test a security plan that is simple, engaging, and flexible
  • Both third- and first-person viewing perspectives for overhead site evaluation and for a virtual "walk-through" of the site, reflecting how it would be performed in the field.

The new technology consists of three kiosks, each composed of a 55" Perceptive Pixel touch screen with an attached projector and camera, and a computer running Virtual Battlespace 2 (VBS2) as the base simulation game. The kiosks can accommodate a team of up to four students, and each kiosk's synthetic environment, along with the team's crafted site security plan, can be displayed on a large wall-mounted LED 3D TV monitor for conducting class briefings and demonstrating simulated security challenges.

In addition to training new recruits, SSPT can also provide in-service protective details with advanced training on a range of scenarios, including preparation against chemical, biological or radiological attacks, armed assaults, suicide bombers and other threats.

Future enhancements to SSPT will include modeling the resulting health effects and crowd behaviors of a chemical, radiological or biological attack, to better prepare personnel for a more comprehensive array of scenarios and the necessary life-saving actions required to protect dignitaries and the public alike.

The Site Security Planning Tool development is expected to be completed and activated by spring 2011.


Source

Tuesday, February 15, 2011

New Wireless Technology Developed for Faster, More Efficient Networks

"Wireless communication is a one-way street. Over."

Radio traffic can flow in only one direction at a time on a specific frequency, hence the frequent use of "over" by pilots and air traffic controllers, walkie-talkie users and emergency personnel as they take turns speaking.

But now, Stanford researchers have developed the first wireless radios that can send and receive signals at the same time.

This immediately makes them twice as fast as existing technology, and with further tweaking will likely lead to even faster and more efficient networks in the future.

"Textbooks say you can't do it," said Philip Levis, assistant professor of computer science and of electrical engineering."The new system completely reworks our assumptions about how wireless networks can be designed," he said.

Cell phone networks allow users to talk and listen simultaneously, but they use a work-around that is expensive and requires careful planning, making the technique less feasible for other wireless networks, including Wi-Fi.

Sparked from a simple idea

A trio of electrical engineering graduate students, Jung Il Choi, Mayank Jain and Kannan Srinivasan, began working on a new approach when they came up with a seemingly simple idea. What if radios could do the same thing our brains do when we listen and talk simultaneously: screen out the sound of our own voice?

In most wireless networks, each device has to take turns speaking or listening. "It's like two people shouting messages to each other at the same time," said Levis. "If both people are shouting at the same time, neither of them will hear the other."

It took the students several months to figure out how to build the new radio, with help from Levis and Sachin Katti, assistant professor of computer science and of electrical engineering.

Their main roadblock to two-way simultaneous conversation was this: Incoming signals are overwhelmed by the radio's own transmissions, making it impossible to talk and listen at the same time.

"When a radio is transmitting, its own transmission is millions, billions of times stronger than anything else it might hear {from another radio}," Levis said."It's trying to hear a whisper while you yourself are shouting."

But, the researchers realized, if a radio receiver could filter out the signal from its own transmitter, weak incoming signals could be heard. "You can make it so you don't hear your own shout and you can hear someone else's whisper," Levis said.

Their setup takes advantage of the fact that each radio knows exactly what it's transmitting, and hence what its receiver should filter out. The process is analogous to noise-canceling headphones.
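
Because the radio knows its own waveform exactly, cancellation can be phrased as subtracting a fitted copy of that waveform from whatever the antenna hears. A minimal numerical sketch with an idealized one-tap leakage channel (real designs combine analog and digital cancellation stages):

    import numpy as np

    # Idealized self-interference cancellation: the receiver subtracts a
    # fitted copy of its own (known) transmission to uncover a far weaker
    # incoming signal. One-tap channel model; real radios need much more.
    rng = np.random.default_rng(7)
    n = 4000
    own_tx = rng.standard_normal(n)               # our transmitted waveform (known)
    remote = 1e-3 * np.sin(0.05 * np.arange(n))   # the distant "whisper"

    leakage = 0.8                                 # assumed TX-to-RX leakage gain
    received = leakage * own_tx + remote          # the "shout" swamps the whisper

    # Estimate the leakage by least squares against the known waveform...
    h = np.dot(received, own_tx) / np.dot(own_tx, own_tx)
    # ...and subtract the shout to reveal the whisper.
    recovered = received - h * own_tx

    err = np.linalg.norm(recovered - remote) / np.linalg.norm(remote)
    print(f"relative recovery error: {err:.3f}")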

When the researchers demonstrated their device last fall at MobiCom 2010, an international gathering of more than 500 of the world's top experts in mobile networking, they won the prize for best demonstration. Until then, people didn't believe sending and receiving signals simultaneously could be done, Jain said. Levis said a researcher even told the students their idea was "so simple and effective, it won't work," because something that obvious must have already been tried unsuccessfully.

Breakthrough for communications technology

But work it did, with major implications for future communications networks. The most obvious effect of sending and receiving signals simultaneously is that it instantly doubles the amount of information you can send, Levis said. That means much-improved home and office networks that are faster and less congested.

But Levis also sees the technology having larger impacts, such as overcoming a major problem with air traffic control communications. With current systems, if two aircraft try to call the control tower at the same time on the same frequency, neither will get through. Levis says these blocked transmissions have caused aircraft collisions, which the new system would help prevent.

The group has a provisional patent on the technology and is working to commercialize it. They are currently trying to increase both the strength of the transmissions and the distances over which they work. These improvements are necessary before the technology is practical for use in Wi-Fi networks.

But even more promising are the system's implications for future networks. Once hardware and software are built to take advantage of simultaneous two-way transmission, "there's no predicting the scope of the results," Levis said.


Source

Saturday, February 12, 2011

Researchers Predict Future of Electronic Devices, See Top Ten List of Expected Breakthroughs

The just-released February issue of the Journal of the Society for Information Display contains the first-ever critical review of current and future prospects for electronic paper functions -- in other words, reviewing and critiquing the technologies that will bring us devices like

  • full-color, high-speed, low-power e-readers;
  • iPads that can be viewed in bright sunlight, or
  • e-readers and iPads so flexible that they can be rolled up and put in a pocket.

The University of Cincinnati's Jason Heikenfeld, associate professor of electrical and computer engineering and an internationally recognized researcher in the field of electrofluidics, is the lead author on the paper titled "A Critical Review of the Present and Future Prospects for Electronic Paper." Others contributing to the article are industry researcher Paul Drzaic of Drzaic Consulting Services; research scientist Jong-Souk (John) Yeo of Hewlett-Packard's Imaging and Printing Group; and research scientist Tim Koch, who currently manages Hewlett-Packard's effort to develop flexible electronics.

Based on this latest article and his ongoing research and development related to e-paper devices, UC's Heikenfeld provides the following top ten list of electronic paper devices that consumers can expect both near term and in the next ten to 20 years.

Heikenfeld is part of a UC team that specializes in research and development of e-devices.

Coming later this year:

  • Color e-readers will be out in the consumer market by mid-2011. However, cautions Heikenfeld, the color will be muted as compared to what consumers are accustomed to, say, on an iPad. Researchers will continue to work toward next-generation (brighter) color in e-readers as well as high-speed functionality that will eventually allow for point-and-click web browsing and video on devices like the Kindle.

Already in use, but wider adoption and breakthroughs imminent:

  • Electronic shelf labels in grocery stores. Currently, it takes an employee the whole day to label the shelves in a grocery store. Imagine the cost savings if all such labels could be updated within seconds -- allowing for, say, specials for one type of consumer who shops at 10 a.m. and updated specials for other shoppers stopping in at 5:30 p.m. Such electronic shelf labels are already in use in Europe and on the West Coast, and in limited, experimental use in other locales. The breakthrough for use of such electronic labels came when they could be implemented as low-power devices. Explained Heikenfeld, "The electronic labels basically only consume significant power when they are changed. When it's a set, static message and price, the e-shelf label is consuming such minimal power -- thanks to reflective display technology -- that it's highly economical and effective." The current e-shelf labels are monochrome, and researchers will keep working to create high-color labels with low power needs.
  • The new"no knobs" etch-a-sketch. This development allows children to draw with electronic ink and erase the whole screen with the push of a button. It was created based on technology developed in Ohio (Kent State University). Stated Heikenfeld,"Ohio institutions, namely the University of Cincinnati and Kent State, are international leaders in display and liquid optics technology."
  • Technology in hot-selling Glow Boards will soon come to signage. Crayola's Glow Board is partially based on UC technology developments, which Crayola then licensed. While the toy allows children to write on a surface that lights up, the technology has many applications, and consumers can expect to see those imminently. These include indoor and outdoor sign displays that, when turned off, seem to be clear windows. (Current LCD -- liquid crystal display -- sign technology requires extremely high power usage and, when turned off, provides nothing more than a non-transparent black background.)

Coming within two years:

  • An e-device that will consume little power while providing high function and color (video playing and web browsing) as well as good visibility in sunlight. Cautions Heikenfeld, "The color on this first-generation low-power, high-function e-device won't be as bright as what you get today from LCD (liquid crystal display) devices (like the iPad) that consume a lot of power. The color on the new low-power, high-function e-device will be about one third as bright as the color you commonly see on printed materials. Researchers, like those of us at UC, will continue to work to produce the Holy Grail of an e-device: bright color, high function (video and web browsing) with low power usage."

Coming within three to five years:

  • Color-adaptable e-device casings. The color and/or designed pattern of the plastic casing that encloses your cell phone will be adaptable. In other words, you'll be able to change the color of the phone itself to a professional black-and-white for work or to a bright and vivid color pattern for a social outing. "This is highly achievable," said Heikenfeld, adding, "It will be able to change color either automatically by reading the color of your outfit that day or by means of a downloaded app. It's possible because of low-power, reflective technology" (wherein the displayed pattern or color change is powered by available ambient light vs. powered by an electrical charge).

Expect the same feature to become available in devices like appliances. "Yes," said Heikenfeld, "We'll see a color-changing app, so that you can have significant portions of your appliances be one color one day and a different color or pattern the next."

  • Bright-color but low-power digital billboards visible both night and day. Currently, the digital billboards commonly seen are based on LEDs (light-emitting diodes), which consume high levels of electric power and still lose color in direct sunlight. Heikenfeld explained, "We have the technology that would allow these digital billboards to operate by simply reflecting ambient light, just like conventional printed billboards do. That means low power usage and good visibility for the displays even in bright sunlight. However, the color doesn't really sizzle yet, and many advertisers using billboards will not tolerate a washed-out color."
  • Foldable or roll-it-up e-devices. Expect the first-generation foldable e-devices, which will come from Polymer Vision in the Netherlands, to be monochrome; color is expected later, using licensed UC-developed technology. The challenge in creating foldable e-devices, according to Heikenfeld, has been the device screen, which is currently made of rigid glass. But what if the screen were a paper-thin plastic that rolled like a window shade? You'd have a device like an iPad that could be folded or rolled up tens of thousands of times. Just roll it up and stick it in your pocket.

Within ten to 20 years:

  • e-Devices with magazine-quality color, viewable in bright sunlight but requiring low power. "Think of this as the green iPad or e-Reader, combining high function and high color with low power requirements," said Heikenfeld.
  • The e-Sheet, a virtually indestructible e-device that will be as thin and as rollable as a rubber place mat. It will be full color and interactive, while requiring little power to operate, since it will charge via sunlight and ambient room light. It will also be so "tough," and will use only wireless connection ports, that you can leave it out overnight in the rain. In fact, you'll be able to wash it or drop it without damaging the thin, highly flexible casing.


Source

Friday, February 11, 2011

CeBIT 2011: Cloud Computing for Administration

Researchers will be presenting these and other solutions on "Computing in the Cloud" at CeBIT in Hanover from March 1-5, 2011.

Cloud computing is a tempting development for IT managers: companies and organizations no longer have to acquire servers and software solutions themselves, and instead rent the capacity they need for data, computing power and applications from professional providers, paying only for what they use. In Germany, it is primarily companies that are turning to cloud computing, transferring their data, applications and networks to server farms at Amazon, Google, IBM, Microsoft or other IT service providers. In the space of just a few years, cloud computing has emerged as a market worth billions, one of considerable importance to business-location policy in the German economy.

In autumn 2010, researchers from the Fraunhofer Institute for Open Communication Systems FOKUS in Berlin, together with their colleagues from the Hertie School of Governance, published a study, "Kooperatives eGovernment -- Cloud Computing für die Öffentliche Verwaltung" {"Cooperative eGovernment: Cloud Computing for Public Administration"}. The study was commissioned by ISPRAT, an organization dedicated to conducting interdisciplinary studies in politics, law, administration and technology. It addresses the aspect of security, identifies risks, and uses various implementation scenarios to describe the benefits and advantages of this new technology for public administrators, with a particular focus on federal requirements in Germany.

"There are considerable reservations about cloud computing in the public-administration area. First, because of the fundamental need to protect citizens' personal data entrusted to public administrators; but also the potential of outsourcing processes are frightening in the eyes of the authorities. Due to fear of the loss of expertise, for one, and for another because the law requires that core tasks remain in the hands of administrators." This is how study co-author Linda Strick of FOKUS summarizes the status quo.

The study points out that cloud-specific security risks do in fact exist, but that these can be fully understood and analyzed. "There is even reason to assume that cloud-based systems can actually fulfill higher security standards than classic solutions," Strick explains. To assist administrators with the introduction of the new technology, FOKUS researchers in the eGovernment laboratory are developing application scenarios for seamless (free of media discontinuities) and hence interoperable use of cloud-computing technologies.

A cockpit for security

To permit companies and public authorities to acquire practical experience with the new technology and test security concepts, experts from the Fraunhofer Institute for Secure Information Technology SIT in Munich have created a Cloud Computing Test Laboratory. Along with security concepts and technologies for cloud-computing providers, researchers there are also developing and studying strategies for secure integration of cloud services in existing IT infrastructures.

"In our test lab, function, reliability and interoperability tests, along with individual security analyses and penetration tests, can be carried out and all of the developmental phases considered, from the design of individual services to prototypes to the testing of fully functional comprehensive systems," notes Angelika Ruppel of SIT in Munich.

Working with the German Federal Office for Information Security {Bundesamt für Sicherheit in der Informationstechnik} BSI, her division has drafted minimum requirements for providers and has developed a Cloud Cockpit. With this solution, companies can securely transfer their data between different cloud systems while monitoring information relevant to security and data protection. Even the use of hybrid cloud infrastructures, with which companies can draw on both internal and external computing power, can be securely controlled using the Cloud Cockpit.


Source

Thursday, February 10, 2011

Ultrafast Quantum Computer Closer: Ten Billion Bits of Entanglement Achieved in Silicon

The researchers used high magnetic fields and low temperatures to produce entanglement between the electron and the nucleus of an atom of phosphorus embedded in a highly purified silicon crystal. The electron and the nucleus each behave as a tiny magnet, or 'spin', and each can represent a bit of quantum information. Suitably controlled, these spins can interact with each other and be coaxed into an entangled state -- the most basic state that cannot be mimicked by a conventional computer.

An international team from the UK, Japan, Canada and Germany report their achievement in the journal Nature.

'The key to generating entanglement was to first align all the spins by using high magnetic fields and low temperatures,' said Stephanie Simmons of Oxford University's Department of Materials, first author of the report. 'Once this has been achieved, the spins can be made to interact with each other using carefully timed microwave and radiofrequency pulses in order to create the entanglement, and then prove that it has been made.'

The work has important implications for integration with existing technology as it uses dopant atoms in silicon, the foundation of the modern computer chip. The procedure was applied in parallel to a vast number of phosphorus atoms.

'Creating 10 billion entangled pairs in silicon with high fidelity is an important step forward for us,' said co-author Dr John Morton of Oxford University's Department of Materials who led the team. 'We now need to deal with the challenge of coupling these pairs together to build a scalable quantum computer in silicon.'

In recent years quantum entanglement has been recognised as a key ingredient in building new technologies that harness quantum properties. It was famously described by Einstein as "spooky action at a distance": when two objects are entangled, it is impossible to describe one without also describing the other, and a measurement of one object will reveal information about the other even if they are separated by thousands of miles.

Creating true entanglement involves crossing the barrier between the ordinary uncertainty encountered in our everyday lives and the strange uncertainties of the quantum world. For example, when flipping a coin there is a 50% chance that it comes up heads and a 50% chance of tails, but we would never imagine the coin could land with both heads and tails facing upwards simultaneously: a quantum object such as an electron spin can do just that.
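
In symbols (a textbook illustration, not the specific state prepared in this experiment), a single spin in superposition and an entangled electron-nucleus pair can be written, in LaTeX notation, as:

    \begin{align*}
      |\psi_{\mathrm{single}}\rangle &= \tfrac{1}{\sqrt{2}}\bigl(|\uparrow\rangle + |\downarrow\rangle\bigr) \\
      |\psi_{\mathrm{pair}}\rangle   &= \tfrac{1}{\sqrt{2}}\bigl(|\uparrow_e \downarrow_n\rangle + |\downarrow_e \uparrow_n\rangle\bigr)
    \end{align*}

In the pair state, a measurement of the electron spin immediately fixes the outcome of a measurement on the nuclear spin: exactly the "spooky" correlation described above.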

Dr Morton said: 'At high temperatures there is simply a 50/50 mixture of spins pointing in different directions but, under the right conditions, all the spins can be made to point in two opposing directions at the same time. Achieving this was critical to the generation of spin entanglement.'


Source

Wednesday, February 9, 2011

World's First Programmable Nanoprocessor: Nanowire Tiles Can Perform Arithmetic and Logical Functions

The groundbreaking prototype computer system, described in a paper appearing in the journal Nature, represents a significant step forward in the complexity of computer circuits that can be assembled from synthesized nanometer-scale components.

It also represents an advance because these ultra-tiny nanocircuits can be programmed electronically to perform a number of basic arithmetic and logical functions.

"This work represents a quantum jump forward in the complexity and function of circuits built from the bottom up, and thus demonstrates that this bottom-up paradigm, which is distinct from the way commercial circuits are built today, can yield nanoprocessors and other integrated systems of the future," says principal investigator Charles M. Lieber, who holds a joint appointment at Harvard's Department of Chemistry and Chemical Biology and School of Engineering and Applied Sciences.

The work was enabled by advances in the design and synthesis of nanowire building blocks. These nanowire components now demonstrate the reproducibility needed to build functional electronic circuits, and also do so at a size and material complexity difficult to achieve by traditional top-down approaches.

Moreover, the tiled architecture is fully scalable, allowing the assembly of much larger and ever more functional nanoprocessors.

"For the past 10 to 15 years, researchers working with nanowires, carbon nanotubes, and other nanostructures have struggled to build all but the most basic circuits, in large part due to variations in properties of individual nanostructures," says Lieber, the Mark Hyman Professor of Chemistry."We have shown that this limitation can now be overcome and are excited about prospects of exploiting the bottom-up paradigm of biology in building future electronics."

An additional feature of the advance is that the circuits in the nanoprocessor operate using very little power, even allowing for their minuscule size, because their component nanowires contain transistor switches that are "nonvolatile."

This means that unlike transistors in conventional microcomputer circuits, once the nanowire transistors are programmed, they do not require any additional expenditure of electrical power for maintaining memory.

"Because of their very small size and very low power requirements, these new nanoprocessor circuits are building blocks that can control and enable an entirely new class of much smaller, lighter weight electronic sensors and consumer electronics," says co-author Shamik Das, the lead engineer in MITRE's Nanosystems Group.

"This new nanoprocessor represents a major milestone toward realizing the vision of a nanocomputer that was first articulated more than 50 years ago by physicist Richard Feynman," says James Ellenbogen, a chief scientist at MITRE.

Co-authors on the paper included four members of Lieber's lab at Harvard: Hao Yan (Ph.D. '10), SungWoo Nam (Ph.D. '10), Yongjie Hu (Ph.D. '10), and doctoral candidate Hwan Sung Choe, as well as collaborators at MITRE.

The research team at MITRE comprised Das, Ellenbogen, and nanotechnology laboratory director Jim Klemic. The MITRE Corporation is a not-for-profit company that provides systems engineering, research and development, and information technology support to the government. MITRE's principal locations are in Bedford, Mass., and McLean, Va.

The research was supported by a Department of Defense National Security Science and Engineering Faculty Fellowship, the National Nanotechnology Initiative, and the MITRE Innovation Program.


Source

Monday, February 7, 2011

Engineers Grow Nanolasers on Silicon, Pave Way for on-Chip Photonics

They describe their work in a paper to be published Feb. 6 in an advance online issue of the journal Nature Photonics.

"Our results impact a broad spectrum of scientific fields, including materials science, transistor technology, laser science, optoelectronics and optical physics," said the study's principal investigator, Connie Chang-Hasnain, UC Berkeley professor of electrical engineering and computer sciences.

The increasing performance demands of electronics have sent researchers in search of better ways to harness the inherent ability of light particles to carry far more data than electrical signals can. Optical interconnects are seen as a solution to overcoming the communications bottleneck within and between computer chips.

Because silicon, the material that forms the foundation of modern electronics, is extremely deficient at generating light, engineers have turned to another class of materials known as III-V (pronounced "three-five") semiconductors to create light-based components such as light-emitting diodes (LEDs) and lasers.

But the researchers pointed out that marrying III-V with silicon to create a single optoelectronic chip has been problematic. For one, the atomic structures of the two materials are mismatched.

"Growing III-V semiconductor films on silicon is like forcing two incongruent puzzle pieces together," said study lead author Roger Chen, a UC Berkeley graduate student in electrical engineering and computer sciences."It can be done, but the material gets damaged in the process."

Moreover, the manufacturing industry is set up for the production of silicon-based materials, so for practical reasons, the goal has been to integrate the fabrication of III-V devices into the existing infrastructure, the researchers said.

"Today's massive silicon electronics infrastructure is extremely difficult to change for both economic and technological reasons, so compatibility with silicon fabrication is critical," said Chang-Hasnain."One problem is that growth of III-V semiconductors has traditionally involved high temperatures -- 700 degrees Celsius or more -- that would destroy the electronics. Meanwhile, other integration approaches have not been scalable."

The UC Berkeley researchers overcame this limitation by finding a way to grow nanopillars made of indium gallium arsenide, a III-V material, onto a silicon surface at the relatively cool temperature of 400 degrees Celsius.

"Working at nanoscale levels has enabled us to grow high quality III-V materials at low temperatures such that silicon electronics can retain their functionality," said Chen.

The researchers used metal-organic chemical vapor deposition to grow the nanopillars on the silicon. "This technique is potentially mass manufacturable, since such a system is already used commercially to make thin film solar cells and light emitting diodes," said Chang-Hasnain.

Once the nanopillar was made, the researchers showed that it could generate near infrared laser light -- a wavelength of about 950 nanometers -- at room temperature. The hexagonal geometry dictated by the crystal structure of the nanopillars creates a new, efficient, light-trapping optical cavity. Light circulates up and down the structure in a helical fashion and amplifies via this optical feedback mechanism.

The unique approach of growing nanolasers directly onto silicon could lead to highly efficient silicon photonics, the researchers said. They noted that the minuscule dimensions of the nanopillars -- smaller than one wavelength on each side, in some cases -- make it possible to pack them into small spaces, with the added benefit of consuming very little energy.

"Ultimately, this technique may provide a powerful and new avenue for engineering on-chip nanophotonic devices such as lasers, photodetectors, modulators and solar cells," said Chen.

"This is the first bottom-up integration of III-V nanolasers onto silicon chips using a growth process compatible with the CMOS (complementary metal oxide semiconductor) technology now used to make integrated circuits," said Chang-Hasnain."This research has the potential to catalyze an optoelectronics revolution in computing, communications, displays and optical signal processing. In the future, we expect to improve the characteristics of these lasers and ultimately control them electronically for a powerful marriage between photonic and electronic devices."

The Defense Advanced Research Projects Agency and a Department of Defense National Security Science and Engineering Faculty Fellowship helped support this research.


Source

Saturday, February 5, 2011

Computer-Assisted Diagnosis Tools to Aid Pathologists

"The advent of digital whole-slide scanners in recent years has spurred a revolution in imaging technology for histopathology," according to Metin N. Gurcan, Ph.D., an associate professor of Biomedical Informatics at The Ohio State University Medical Center."The large multi-gigapixel images produced by these scanners contain a wealth of information potentially useful for computer-assisted disease diagnosis, grading and prognosis."

Follicular Lymphoma (FL) is one of the most common forms of non-Hodgkin Lymphoma occurring in the United States. FL is a cancer of the human lymph system that usually spreads into the blood, bone marrow and, eventually, internal organs.

A World Health Organization pathological grading system is applied to biopsy samples; doctors usually avoid prescribing severe therapies for lower grades, while they usually recommend radiation and chemotherapy regimens for more aggressive grades.

Accurate grading of the pathological samples generally leads to a reliable prognosis, but grading today depends solely upon a labor-intensive process that can be affected by human factors such as fatigue, reader variation and bias: pathologists must visually examine and grade the specimens through high-powered microscopes.

Processing and analysis of such high-resolution images, Gurcan points out, remain non-trivial tasks, not just because of the sheer size of the images, but also due to complexities of underlying factors involving differences in staining, illumination, instrumentation and goals. To overcome many of these obstacles to automation, Gurcan and medical center colleagues, Dr. Gerard Lozanski and Dr. Arwa Shana'ah, turned to the Ohio Supercomputer Center.

Ashok Krishnamurthy, Ph.D., interim co-executive director of the center, and Siddharth Samsi, a computational science researcher there and an OSU graduate student in Electrical and Computer Engineering, put the power of a supercomputer behind the process.

"Our group has been developing tools for grading of follicular lymphoma with promising results," said Samsi."We developed a new automated method for detecting lymph follicles using stained tissue by analyzing the morphological and textural features of the images, mimicking the process that a human expert might use to identify follicle regions. Using these results, we developed models to describe tissue histology for classification of FL grades."

Histological grading of FL is based on the number of large malignant cells counted within tissue samples measuring just 0.159 square millimeters, taken from ten different locations. Based on these counts, FL is assigned to one of three increasing grades of malignancy: Grade I (0-5 cells), Grade II (6-15 cells) and Grade III (more than 15 cells).
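
As a minimal Python sketch of the count-to-grade rule just described -- assuming, as a simplification of actual WHO practice, that the average count over the ten fields decides the grade:

    def fl_grade(cell_counts):
        """Map large-malignant-cell counts from ten 0.159-mm^2 fields
        to a WHO follicular lymphoma grade (ranges quoted above)."""
        avg = sum(cell_counts) / len(cell_counts)
        if avg <= 5:
            return "Grade I"
        if avg <= 15:
            return "Grade II"
        return "Grade III"

    print(fl_grade([3, 4, 2, 5, 6, 3, 4, 5, 2, 3]))  # -> Grade I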

"The first step involves identifying potentially malignant regions by combining color and texture features," Samsi explained."The second step applies an iterative watershed algorithm to separate merged regions and the final step involves eliminating false positives."

The large data sizes and complexity of the algorithms led Gurcan and Samsi to leverage the parallel computing resources of OSC's Glenn Cluster in order to reduce the time required to process the images. They used MATLAB® and the Parallel Computing Toolbox™ to achieve significant speed-ups. Speed is the goal of the National Cancer Institute-funded research project, but accuracy is essential: Gurcan and Samsi compared their computer segmentation results with manual segmentation and found an average similarity score of 87.11 percent.

"This algorithm is the first crucial step in a computer-aided grading system for Follicular Lymphoma," Gurcan said."By identifying all the follicles in a digitized image, we can use the entire tissue section for grading of the disease, thus providing experts with another tool that can help improve the accuracy and speed of the diagnosis."


Source

Friday, February 4, 2011

Future Surgeons May Use Robotic Nurse, 'Gesture Recognition'

Both the hand-gesture recognition and robotic nurse innovations might help to reduce the length of surgeries and the potential for infection, said Juan Pablo Wachs, an assistant professor of industrial engineering at Purdue University.

The"vision-based hand gesture recognition" technology could have other applications, including the coordination of emergency response activities during disasters.

"It's a concept Tom Cruise demonstrated vividly in the film 'Minority Report,'" Wachs said.

Surgeons routinely need to review medical images and records during surgery, but stepping away from the operating table and touching a keyboard and mouse can delay the surgery and increase the risk of spreading infection-causing bacteria.

The new approach is a system that uses a camera and specialized algorithms to recognize hand gestures as commands to instruct a computer or robot.

At the same time, a robotic scrub nurse represents a potential new tool that might improve operating-room efficiency, Wachs said.

Findings from the research will be detailed in a paper appearing in the February issue of Communications of the ACM, the flagship publication of the Association for Computing Machinery. The paper was written by researchers at Purdue, the Naval Postgraduate School in Monterey, Calif., and Ben-Gurion University of the Negev, Israel.

Research into hand-gesture recognition began several years ago in work led by the Washington Hospital Center and Ben-Gurion University, where Wachs was a research fellow and doctoral student, respectively.

He is now working to extend the system's capabilities in research with Purdue's School of Veterinary Medicine and the Department of Speech, Language, and Hearing Sciences.

"One challenge will be to develop the proper shapes of hand poses and the proper hand trajectory movements to reflect and express certain medical functions," Wachs said."You want to use intuitive and natural gestures for the surgeon, to express medical image navigation activities, but you also need to consider cultural and physical differences between surgeons. They may have different preferences regarding what gestures they may want to use."

Other challenges include providing computers with the ability to understand the context in which gestures are made and to discriminate between intended gestures versus unintended gestures.

"Say the surgeon starts talking to another person in the operating room and makes conversational gestures," Wachs said."You don't want the robot handing the surgeon a hemostat."

A scrub nurse assists the surgeon and hands the proper surgical instruments to the doctor when needed.

"While it will be very difficult using a robot to achieve the same level of performance as an experienced nurse who has been working with the same surgeon for years, often scrub nurses have had very limited experience with a particular surgeon, maximizing the chances for misunderstandings, delays and sometimes mistakes in the operating room," Wachs said."In that case, a robotic scrub nurse could be better."

The Purdue researcher has developed a prototype robotic scrub nurse, in work with faculty in the university's School of Veterinary Medicine.

Researchers at other institutions developing robotic scrub nurses have focused on voice recognition. However, little work has been done in the area of gesture recognition, Wachs said.

"Another big difference between our focus and the others is that we are also working on prediction, to anticipate what images the surgeon will need to see next and what instruments will be needed," he said.

Wachs is developing advanced algorithms that isolate the hands and apply "anthropometry," or predicting the position of the hands based on knowledge of where the surgeon's head is. The tracking is achieved through a camera mounted over the screen used for visualization of images.

"Another contribution is that by tracking a surgical instrument inside the patient's body, we can predict the most likely area that the surgeon may want to inspect using the electronic image medical record, and therefore saving browsing time between the images," Wachs said."This is done using a different sensor mounted over the surgical lights."

The hand-gesture recognition system uses a new type of camera developed by Microsoft, called Kinect, which senses three-dimensional space. The camera is found in new consumer electronics games that can track a person's hands without the use of a wand.

"You just step into the operating room, and automatically your body is mapped in 3-D," he said.

Accuracy and gesture-recognition speed depend on advanced software algorithms.

"Even if you have the best camera, you have to know how to program the camera, how to use the images," Wachs said."Otherwise, the system will work very slowly."

The research paper defines a set of requirements (illustrated by a brief sketch after this list), including recommendations that the system should:

  • Use a small vocabulary of simple, easily recognizable gestures.
  • Not require the user to wear special virtual reality gloves or certain types of clothing.
  • Be as low-cost as possible.
  • Be responsive and able to keep up with the speed of a surgeon's hand gestures.
  • Let the user know whether it understands the hand gestures by providing feedback, perhaps just a simple "OK."
  • Use gestures that are easy for surgeons to learn, remember and carry out with little physical exertion.
  • Be highly accurate in recognizing hand gestures.
  • Use intuitive gestures, such as two fingers held apart to mimic a pair of scissors.
  • Be able to disregard unintended gestures by the surgeon, perhaps made in conversation with colleagues in the operating room.
  • Be able to quickly configure itself to work properly in different operating rooms, under various lighting conditions and other criteria.
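
A minimal Python sketch of how a few of these requirements (a small gesture vocabulary, rejection of low-confidence gestures, and simple "OK" feedback) might fit together. Every name here is hypothetical, and the recognizer producing the gesture label and confidence is assumed to exist:

    GESTURE_COMMANDS = {            # small, easily recognizable vocabulary
        "two_finger_scissors": "hand over scissors",
        "swipe_left": "previous image",
        "swipe_right": "next image",
    }
    CONFIDENCE_THRESHOLD = 0.85     # below this, assume a conversational gesture

    def dispatch(gesture, confidence):
        """Turn a recognized gesture into a command, disregarding
        low-confidence (likely unintended) gestures."""
        if confidence < CONFIDENCE_THRESHOLD or gesture not in GESTURE_COMMANDS:
            return None             # ignore: probably not meant for the robot
        print("OK")                 # feedback so the surgeon knows it was understood
        return GESTURE_COMMANDS[gesture]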

"Eventually we also want to integrate voice recognition, but the biggest challenges are in gesture recognition," Wachs said."Much is already known about voice recognition."

The work is funded by the U.S. Agency for Healthcare Research and Quality.


Source

Thursday, February 3, 2011

New Mathematical Model of Information Processing in the Brain Accurately Predicts Some of the Peculiarities of Human Vision

At the Society of Photo-Optical Instrumentation Engineers' Human Vision and Electronic Imaging conference on Jan. 27, Ruth Rosenholtz, a principal research scientist in the Department of Brain and Cognitive Sciences, presented a new mathematical model of how the brain does that summarizing. The model accurately predicts the visual system's failure on certain types of image-processing tasks, a good indication that it captures some aspect of human cognition.

Most models of human object recognition assume that the first thing the brain does with a retinal image is identify edges -- boundaries between regions with different light-reflective properties -- and sort them according to alignment: horizontal, vertical and diagonal. Then, the story goes, the brain starts assembling these features into primitive shapes, registering, for instance, that in some part of the visual field, a horizontal feature appears above a vertical feature, or two diagonals cross each other. From these primitive shapes, it builds up more complex shapes -- four L's with different orientations, for instance, would make a square -- and so on, until it's constructed shapes that it can identify as features of known objects.

While this might be a good model of what happens at the center of the visual field, Rosenholtz argues, it's probably less applicable to the periphery, where human object discrimination is notoriously weak. In a series of papers in the last few years, Rosenholtz has proposed that cognitive scientists instead think of the brain as collecting statistics on the features in different patches of the visual field.

Patchy impressions

On Rosenholtz's model, the patches described by the statistics get larger the farther they are from the center. This corresponds with a loss of information, in the same sense that, say, the average income for a city is less informative than the average income for every household in the city. At the center of the visual field, the patches might be so small that the statistics amount to the same thing as descriptions of individual features: A 100-percent concentration of horizontal features could indicate a single horizontal feature. So Rosenholtz's model would converge with the standard model.

But at the edges of the visual field, the models come apart. A large patch whose statistics are, say, 50 percent horizontal features and 50 percent vertical could contain an array of a dozen plus signs, or an assortment of vertical and horizontal lines, or a grid of boxes.

In fact, Rosenholtz's model includes statistics on much more than just orientation of features: There are also measures of things like feature size, brightness and color, and averages of other features -- about 1,000 numbers in all. But in computer simulations, storing even 1,000 statistics for every patch of the visual field requires only one-90th as many virtual neurons as storing visual features themselves, suggesting that statistical summary could be the type of space-saving technique the brain would want to exploit.

Rosenholtz's model grew out of her investigation of a phenomenon called visual crowding. If you were to concentrate your gaze on a point at the center of a mostly blank sheet of paper, you might be able to identify a solitary A at the left edge of the page. But you would fail to identify an identical A at the right edge, the same distance from the center, if instead of standing on its own it were in the center of the word "BOARD."

Rosenholtz's approach explains this disparity: The statistics of the lone A are specific enough to A's that the brain can infer the letter's shape; but the statistics of the corresponding patch on the other side of the visual field also factor in the features of the B, O, R and D, resulting in aggregate values that don't identify any of the letters clearly.

Road test

Rosenholtz's group has also conducted a series of experiments with human subjects designed to test the validity of the model. Subjects might, for instance, be asked to search for a target object -- like the letter O -- amid a sea of "distractors" -- say, a jumble of other letters. A patch of the visual field that contains 11 Q's and one O would have very similar statistics to one that contains a dozen Q's. But it would have much different statistics than a patch that contained a dozen plus signs. In experiments, the degree of difference between the statistics of different patches is an extremely good predictor of how quickly subjects can find a target object: it's much easier to find an O among plus signs than it is to find it amid Q's.
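
A toy Python version of this idea, using a handful of statistics (mean brightness plus a four-bin gradient-orientation histogram) in place of the model's roughly 1,000:

    import numpy as np

    def patch_stats(patch):
        """Summarize a grayscale patch by mean brightness plus a
        gradient-weighted 4-bin histogram of edge orientations."""
        gy, gx = np.gradient(patch.astype(float))
        angles = np.arctan2(gy, gx)
        hist, _ = np.histogram(angles, bins=4, range=(-np.pi, np.pi),
                               weights=np.hypot(gx, gy))
        hist = hist / (hist.sum() + 1e-9)
        return np.concatenate([[patch.mean()], hist])

    def stat_distance(patch_a, patch_b):
        """Larger distances predict faster search, as in the experiments."""
        return np.linalg.norm(patch_stats(patch_a) - patch_stats(patch_b))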

Rosenholtz, who has a joint appointment to the Computer Science and Artificial Intelligence Laboratory, is also interested in the implications of her work for data visualization, an active research area in its own right. For instance, designing subway maps with an eye to maximizing the differences between the summary statistics of different regions could make them easier for rushing commuters to take in at a glance.

In vision science,"there's long been this notion that somehow what the periphery is for is texture," says Denis Pelli, a professor of psychology and neural science at New York University. Rosenholtz's work, he says,"is turning it into real calculations rather than just a side comment." Pelli points out that the brain probably doesn't track exactly the 1,000-odd statistics that Rosenholtz has used, and indeed, Rosenholtz says that she simply adopted a group of statistics commonly used to describe visual data in computer vision research. But Pelli also adds that visual experiments like the ones that Rosenholtz is performing are the right way to narrow down the list to"the ones that really matter."


Source

Wednesday, February 2, 2011

Internet Addresses: An Inevitable Shortage, but an Uneven One

There is some good news, according to computer scientist John Heidemann, who heads a team at the USC Viterbi School of Engineering Information Sciences Institute that has just released its results in the form of a detailed outline, including a 10-minute video and an interactive web browser that allows users to explore the nooks and crannies of Internet space themselves.

Heidemann, who is a senior project leader at ISI and a research associate professor in the USC Viterbi School of Engineering Department of Computer Science, says his group has found that while some of the already allocated address blocks (units of Internet real estate, ranging from 256 to more than 16 million addresses) are heavily used, many are still sparsely used. "Even allowing for undercount," the group finds, "probably only 14 percent of addresses are visible on the public Internet."

Nevertheless,"as full allocation happens, there will be pressure to improve utilization and eventually trade underutilized areas," the video shows. These strategies have limits, the report notes. Better utilization, trading, and other strategies can recover"twice or four times current utilization. But requests for address double every year, so trading will only help for two years. Four billion addresses are just not enough for 7 billion people."

The IPv6 protocol allows many, many more addresses -- 2^128, or roughly 340 trillion trillion trillion -- but may involve transition costs.

Heidemann's group's report comes as the Number Resource Organization (NRO) and the Internet Assigned Numbers Authority (IANA) are preparing to announce that they have given out all the addresses, passing most of them on to regional authorities.

The ISI video offers a thorough background in the hows and whys of the current IPv4 Internet address system, in which each address is a number between zero and 2^32 - 1 (4,294,967,295), usually written in "dotted-decimal notation" as four base-10 numbers separated by periods.
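
The notation is easy to reproduce in a few lines of Python, which makes the address arithmetic concrete:

    def to_dotted_decimal(n):
        """Write a 32-bit address as four base-10 numbers separated by periods."""
        return ".".join(str((n >> shift) & 0xFF) for shift in (24, 16, 8, 0))

    def from_dotted_decimal(s):
        """Recover the underlying 32-bit integer from dotted-decimal notation."""
        a, b, c, d = (int(part) for part in s.split("."))
        return (a << 24) | (b << 16) | (c << 8) | d

    print(to_dotted_decimal(4294967295))      # -> 255.255.255.255
    print(from_dotted_decimal("192.0.2.1"))   # -> 3221225985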

Heidemann, working with collaborator Yuri Pradkin and ISI colleagues, produced an earlier Internet census in 2007, following on previous work at ISI -- the first complete census since 1982. To do it, they sent a message (a 'ping') to each possible Internet address. The video explains the pinging process.

At the time, some 2.8 billion of the 4.3 billion possible addresses had been allocated; today more than 3.5 billion are allocated. The current effort, funded by the Department of Homeland Security Science and Technology Directorate and the NSF, was carried out by Aniruddh Rao and Xue Cui of ISI, along with Heidemann. A peer-reviewed analysis of their approach appeared at the ACM Internet Measurement Conference in 2008.


Source

Tuesday, February 1, 2011

Physicists Challenge Classical World With Quantum-Mechanical Implementation of 'Shell Game'

In a paper published in the Jan. 30 issue of the journal Nature Physics, UCSB researchers show the first demonstration of the coherent control of a multi-resonator architecture. This topic has been a holy grail among physicists studying photons at the quantum-mechanical level for more than a decade.

The UCSB researchers are Matteo Mariantoni, postdoctoral fellow in the Department of Physics; Haohua Wang, postdoctoral fellow in physics; John Martinis, professor of physics; and Andrew Cleland, professor of physics.

According to the paper, the"shell man," the researcher, makes use of two superconducting quantum bits (qubits) to move the photons -- particles of light -- between the resonators. The qubits -- the quantum-mechanical equivalent of the classical bits used in a common PC -- are studied at UCSB for the development of a quantum super computer. They constitute one of the key elements for playing the photon shell game.

"This is an important milestone toward the realization of a large-scale quantum register," said Mariantoni."It opens up an entirely new dimension in the realm of on-chip microwave photonics and quantum-optics in general."

The researchers fabricated a chip on which three resonators, each a few millimeters long, are coupled to two qubits. "The architecture studied in this work resembles a quantum railroad," said Mariantoni. "Two quantum stations -- two of the three resonators -- are interconnected through the third resonator, which acts as a quantum bus. The qubits control the traffic and allow the shuffling of photons among the resonators."

In a related experiment, the researchers played a more complex game, inspired by the Towers of Hanoi, a mathematical puzzle that according to legend originated in an ancient Indian temple.

The Towers of Hanoi puzzle consists of three posts and a pile of disks of different diameter, which can slide onto any post. The puzzle starts with the disks in a stack in ascending order of size on one post, with the smallest disk at the top. The aim of the puzzle is to move the entire stack to another post, with only one disk being moved at a time, and with no disk being placed on top of a smaller disk.
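
The classical puzzle has a standard recursive solution, sketched here in Python for the disk version (the quantum version shuffles photons among the three resonators under the same rules):

    def hanoi(n, source, target, spare, moves=None):
        """List the moves that shift n disks from source to target,
        never placing a larger disk on a smaller one."""
        if moves is None:
            moves = []
        if n > 0:
            hanoi(n - 1, source, spare, target, moves)
            moves.append((n, source, target))
            hanoi(n - 1, spare, target, source, moves)
        return moves

    for disk, src, dst in hanoi(3, "post A", "post C", "post B"):
        print("move disk %d from %s to %s" % (disk, src, dst))  # 7 moves in all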

In the quantum-mechanical version of the Towers of Hanoi, the three posts are represented by the resonators and the disks by quanta of light with different energies. "This game demonstrates that a truly bosonic excitation can be shuffled among resonators -- an interesting example of the quantum-mechanical nature of light," said Mariantoni.

Mariantoni was supported in this work by an Elings Prize Fellowship in Experimental Science from UCSB's California NanoSystems Institute.


Source