Thursday, March 31, 2011

Physicists Rotate Beams of Light

Light waves can oscillate in different directions -- much like a string that can vibrate up and down or left and right, depending on the direction in which it is plucked. This is called the polarization of light. Physicists at the Vienna University of Technology, together with researchers at Würzburg University, have now developed a method to control and manipulate the polarization of light using ultra-thin layers of semiconductor material.

For future research on light and its polarization, this is an important step forward -- and this breakthrough could even open up possibilities for completely new computer technology. The experiment can be viewed as the optical version of an electronic transistor. The results of the experiment have now been published in the journal Physical Review Letters.

Controlling light with magnetic fields

The polarization of light can change when it passes through a material in a strong magnetic field. This phenomenon is known as the "Faraday effect." "So far, however, this effect had only been observed in materials in which it was very weak," professor Andrei Pimenov explains. He carried out the experiments at the Institute for Solid State Physics of the TU Vienna, together with his assistant Alexey Shuvaev. Using light of the right wavelength and extremely clean semiconductors, scientists in Vienna and Würzburg could achieve a Faraday effect which is orders of magnitude stronger than ever measured before.

Now light waves can be rotated into arbitrary directions -- the direction of the polarization can be tuned with an external magnetic field. Surprisingly, an ultra-thin layer of less than a thousandth of a millimeter is enough to achieve this. "Such thin layers made of other materials could only change the direction of polarization by a fraction of one degree," says professor Pimenov. If the beam of light is then sent through a polarization filter, which only allows light of a particular direction of polarization to pass, the scientists can, by rotating the polarization appropriately, decide whether the beam should pass or not.
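
To make the switching idea concrete, here is a minimal sketch in Python: the polarization is rotated by some field-dependent angle, and Malus's law then gives the fraction of light that survives the polarization filter. The rotation of 90 degrees per unit field is an invented number for illustration, not a parameter of the Vienna experiment.

```python
import numpy as np

def transmitted_fraction(rotation_deg, filter_axis_deg=0.0):
    """Malus's law: fraction of intensity passing a linear polarizer
    after the beam's polarization has been rotated by rotation_deg."""
    theta = np.radians(rotation_deg - filter_axis_deg)
    return np.cos(theta) ** 2

# Illustrative only: treat the rotation angle as proportional to the applied field.
for b_field in [0.0, 0.5, 1.0]:              # arbitrary field units
    rotation = 90.0 * b_field                 # hypothetical 90 degrees per unit field
    print(f"B = {b_field}: {transmitted_fraction(rotation):.2f} of the light passes")
```

With no field the beam passes untouched; at full field the polarization has been rotated by 90 degrees and the filter blocks it, which is the on/off behavior the optical-transistor analogy rests on.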

The key to this astonishing effect lies in the behavior of the electrons in the semiconductor. The beam of light sets the electrons oscillating, and the magnetic field deflects their vibrating motion. This complicated motion of the electrons in turn affects the beam of light and changes its direction of polarization.

An optical transistor

In the experiment, a layer of the semiconductor mercury telluride was irradiated with light in the infrared spectral range. "The light has a frequency in the terahertz domain -- these are the frequencies that future generations of computers may operate at," professor Pimenov believes. "For years, the clock rates of computers have not really increased, because a point has been reached at which material properties just don't play along anymore." A possible solution is to complement electronic circuits with optical elements. In a transistor, the basic element of electronics, an electric current is controlled by an external signal. In the experiment at TU Vienna, a beam of light is controlled by an external magnetic field. The two systems are very much alike. "We could call our system a light-transistor," Pimenov suggests.

Before optical circuits for computers can be considered, the newly discovered effect will prove useful as a tool for further research. In optics labs, it will play an important role in research on new materials and the physics of light.


Source

Thursday, March 24, 2011

BrainGate Neural Interface System Reaches 1,000-Day Performance Milestone

Results from five consecutive days of device use surrounding the participant's 1,000th day in the device trial appeared online March 24 in the Journal of Neural Engineering.

"This proof of concept -- that after 1,000 days a woman who has no functional use of her limbs and is unable to speak can reliably control a cursor on a computer screen using only the intended movement of her hand -- is an important step for the field," said Dr. Leigh Hochberg, a Brown engineering associate professor, VA rehabilitation researcher, visiting associate professor of neurology at Harvard Medical School, and director of the BrainGate pilot clinical trial at MGH.

The woman, identified in the paper as S3, performed two "point-and-click" tasks each day by thinking about moving the cursor with her hand. In both tasks she averaged greater than 90 percent accuracy. Some on-screen targets were as small as the effective area of a Microsoft Word menu icon.

In each of S3's two tasks, performed in 2008, she controlled the cursor movement and click selections continuously for 10 minutes. The first task was to move the cursor to targets arranged in a circle and in the center of the screen, clicking to select each one in turn. The second required her to follow and click on a target as it sequentially popped up with varying size at random points on the screen.

From fundamental neuroscience to clinical utility

Under development since 2002, the investigational BrainGate system is a combination of hardware and software that directly senses electrical signals produced by neurons in the brain that control movement. By decoding those signals and translating them into digital instructions, the system is being evaluated for its ability to give people with paralysis control of external devices such as computers, robotic assistive devices, or wheelchairs. The BrainGate team is also engaged in research toward control of advanced prosthetic limbs and toward direct intracortical control of functional electrical stimulation devices for people with spinal cord injury, in collaboration with researchers at the Cleveland FES Center.
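
As a rough illustration of what "decoding those signals" can involve, the sketch below fits a simple linear map from binned firing rates to intended cursor velocity and then uses it to turn new neural activity into a movement command. The data are synthetic and the plain least-squares decoder is only a stand-in; the actual BrainGate trial uses more sophisticated decoding.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: firing rates from 96 recording channels (rows = time bins)
# paired with the 2-D cursor velocity the participant intended in each bin.
n_bins, n_channels = 500, 96
intended_velocity = rng.normal(size=(n_bins, 2))
tuning = rng.normal(size=(2, n_channels))
firing_rates = intended_velocity @ tuning + 0.5 * rng.normal(size=(n_bins, n_channels))

# Fit a linear decoder: velocity is approximated by firing_rates @ weights.
weights, *_ = np.linalg.lstsq(firing_rates, intended_velocity, rcond=None)

# Decode a fresh bin of neural activity into a cursor velocity command.
new_rates = rng.normal(size=(1, n_channels))
print("decoded cursor velocity:", new_rates @ weights)
```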

The system is currently in pilot clinical trials, directed by Hochberg at MGH.

BrainGate uses a tiny (4x4 mm, about the size of a baby aspirin) silicon electrode array to read neural signals directly within brain tissue. Although external sensors placed on the brain or skull surface can also read neural activity, they are believed to be far less precise. In addition, many prototype brain implants have eventually failed because of moisture or other perils of the internal environment.

"Neuroengineers have often wondered whether useful signals could be recorded from inside the brain for an extended period of time," Hochberg said."This is the first demonstration that this microelectrode array technology can provide useful neuroprosthetic signals allowing a person with tetraplegia to control an external device for an extended period of time."

Moving forward

Device performance was not the same at 2.7 years as it was earlier on, Hochberg added. At 33 months fewer electrodes were recording useful neural signals than after only six months. But John Donoghue -- VA senior research career scientist, Henry Merritt Wriston Professor of Neuroscience, director of the Brown Institute for Brain Science, and original developer of the BrainGate system -- said no evidence has emerged of any fundamental incompatibility between the sensor and the brain. Instead, it appears that decreased signal quality over time can largely be attributed to engineering, mechanical or procedural issues. Since S3's sensor was built and implanted in 2005, the sensor's manufacturer has reported continual quality improvements. The data from this study will be used to further understand and modify the procedures or device to further increase durability.

"None of us will be fully satisfied with an intracortical recording device until it provides decades of useful signals," Hochberg said."Nevertheless, I'm hopeful that the progress made in neural interface systems will someday be able to provide improved communication, mobility, and independence for people with locked-in syndrome or other forms of paralysis and eventually better control over prosthetic, robotic, or functional electrical stimulation systems {stimulating electrodes that have already returned limb function to people with cervical spinal cord injury}, even while engineers continue to develop ever-better implantable sensors."

In addition to demonstrating the very encouraging longevity of the BrainGate sensor, the paper also presents an advance in how the performance of a brain-computer interface can be measured, Simeral said. "As the field continues to evolve, we'll eventually be able to compare and contrast technologies effectively."

As for S3, who had a brainstem stroke in the mid-1990s and is now in her late 50s, she continues to participate in trials with the BrainGate system, which continues to record useful signals, Hochberg said. However, data beyond the 1,000th day in 2008 has thus far only been presented at scientific meetings, and Hochberg can only comment on data that has already completed the scientific peer review process and appeared in publication.

In addition to Simeral, Hochberg, and Donoghue, other authors are Brown computer scientist Michael Black and former Brown computer scientist Sung-Phil Kim.

About the BrainGate collaboration

This advance is the result of the ongoing collaborative BrainGate research at Brown University, Massachusetts General Hospital, and Providence VA Medical Center. The BrainGate research team is focused on developing and testing neuroscientifically inspired technologies to improve the communication, mobility, and independence of people with neurologic disorders, injury, or limb loss.

For more information, visit www.braingate2.org.

The implanted microelectrode array and associated neural recording hardware used in the BrainGate research are manufactured by BlackRock Microsystems, LLC (Salt Lake City, UT).

This research was funded in part by the Rehabilitation Research and Development Service, Department of Veterans Affairs; The National Institutes of Health (NIH), including NICHD-NCMRR, NINDS/NICHD, NIDCD/ARRA, NIBIB, NINDS-Javits; the Doris Duke Charitable Foundation; MGH-Deane Institute for Integrated Research on Atrial Fibrillation and Stroke; and the Katie Samson Foundation.

The BrainGate pilot clinical trial was previously directed by Cyberkinetics Neurotechnology Systems, Inc., Foxborough, MA (CKI). CKI ceased operations in 2009. The clinical trials of the BrainGate2 Neural Interface System are now administered by Massachusetts General Hospital, Boston, Mass. Donoghue is a former chief scientific officer and a former director of CKI; he held stocks and received compensation. Hochberg received research support from Massachusetts General and Spaulding Rehabilitation Hospitals, which in turn received clinical trial support from Cyberkinetics. Simeral received compensation as a consultant to CKI.


Source

Tuesday, March 22, 2011

Simulating Tomorrow's Accelerators at Near the Speed of Light

But realizing the promise of laser-plasma accelerators crucially depends on being able to simulate their operation in three-dimensional detail. Until now such simulations have challenged or exceeded even the capabilities of supercomputers.

A team of researchers led by Jean-Luc Vay of Berkeley Lab's Accelerator and Fusion Research Division (AFRD) has borrowed a page from Einstein to perfect a revolutionary new method for calculating what happens when a laser pulse plows through a plasma in an accelerator like BELLA. Using their "boosted-frame" method, Vay's team has achieved full 3-D simulations of a BELLA stage in just a few hours of supercomputer time, calculations that would have been beyond the state of the art just two years ago.

Not only are the recent BELLA calculations tens of thousands of times faster than conventional methods, they overcome problems that plagued previous attempts to achieve the full capacity of the boosted-frame method, such as violent numerical instabilities. Vay and his colleagues, Cameron Geddes of AFRD, Estelle Cormier-Michel of the Tech-X Corporation in Denver, and David Grote of Lawrence Livermore National Laboratory, publish their latest findings in the March 2011 issue of the journal Physics of Plasmas.

Space, time, and complexity

The boosted-frame method, first proposed by Vay in 2007, exploits Einstein's Special Theory of Relativity to overcome difficulties posed by the huge range of space and time scales in many accelerator systems. Vast discrepancies of scale are what made simulating these systems too costly.

"Most researchers assumed that since the laws of physics are invariable, the huge complexity of these systems must also be invariable," says Vay."But what are the appropriate units of complexity? It turns out to depend on how you make the measurements."

Laser-plasma wakefield accelerators are particularly challenging: they send a very short laser pulse through a plasma measuring a few centimeters or more, many orders of magnitude longer than the pulse itself (or the even-shorter wavelength of its light). In its wake, like a speedboat on water, the laser pulse creates waves in the plasma. These alternating waves of positively and negatively charged particles set up intense electric fields. Bunches of free electrons, shorter than the laser pulse, "surf" the waves and are accelerated to high energies.

"The most common way to model a laser-plasma wakefield accelerator in a computer is by representing the electromagnetic fields as values on a grid, and the plasma as particles that interact with the fields," explains Geddes, a member of the BELLA science staff who has long worked on laser-plasma acceleration."Since you have to resolve the finest structures -- the laser wavelength, the electron bunch -- over the relatively enormous length of the plasma, you need a grid with hundreds of millions of cells."

The laser period must also be resolved in time, and calculated over millions of time steps. As a result, while much of the important physics of BELLA is three-dimensional, direct 3-D simulation was initially impractical. Just a one-dimensional simulation of BELLA required 5,000 hours of supercomputer processor time at Berkeley Lab's National Energy Research Scientific Computing Center (NERSC).
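
A back-of-the-envelope version of that scale argument, with assumed numbers (a near-infrared drive laser, a few centimeters of plasma, and a modest number of cells per wavelength; the real BELLA design values differ):

```python
# Illustrative scale-disparity estimate; the numbers are assumptions, not BELLA's design.
laser_wavelength_m   = 0.8e-6   # assumed near-infrared drive laser
plasma_length_m      = 0.05     # "a few centimeters" of plasma
steps_per_wavelength = 20       # resolve each optical cycle in space and in time

wavelengths_in_plasma = plasma_length_m / laser_wavelength_m
time_steps = wavelengths_in_plasma * steps_per_wavelength
print(f"laser wavelengths spanned by the plasma: {wavelengths_in_plasma:.1e}")
print(f"time steps for the pulse to cross it:    {time_steps:.1e}")   # on the order of millions
```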

Choosing the right frame

The key to reducing complexity and cost lies in choosing the right point of view, or "reference frame." When Albert Einstein was 16 years old he imagined riding along in a frame moving with a beam of light -- a thought experiment that, 10 years later, led to his Special Theory of Relativity, which establishes that there is no privileged reference frame. Observers moving at different velocities may experience space and time differently and even see things happening in a different order, but calculations from any point of view can recover the same physical result.

Among the consequences are that the speed of light in a vacuum is always the same; compared to a stationary observer's experience, time moves more slowly while space contracts for an observer traveling near light speed. These different points of view are called Lorentz frames, and changing one for another is called a Lorentz transformation. The "boosted frame" of the laser pulse is the key to enabling calculations of laser-plasma wakefield accelerators that would otherwise be inaccessible.

A laser pulse pushing through a tenuous plasma moves only a little slower than light through a vacuum. An observer in the stationary laboratory frame sees it as a rapid oscillation of electromagnetic fields moving through a very long plasma, whose simulation requires high resolution and many time steps. But for an observer moving with the pulse, time slows, and the frequency of the oscillations is greatly reduced; meanwhile space contracts, and the plasma becomes much shorter. Thus relatively few time steps are needed to model the interaction between the laser pulse, the plasma waves formed in its wake, and the bunches of electrons riding the wakefield through the plasma. Fewer steps mean less computer time.
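
The payoff can be estimated with a few lines of arithmetic: in a frame boosted with Lorentz factor gamma, the plasma contracts while the laser wavelength is Doppler-stretched, so the number of optical cycles that must be resolved, and with it the number of time steps, shrinks by roughly (1 + beta) times gamma squared. The gamma value below is illustrative, not the one used in the paper.

```python
import math

def scale_disparity(gamma, plasma_m=0.05, wavelength_m=0.8e-6):
    """Ratio of plasma length to laser wavelength as seen in a frame
    boosted with Lorentz factor gamma along the laser direction."""
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    plasma_boosted = plasma_m / gamma                         # length contraction
    wavelength_boosted = wavelength_m * (1.0 + beta) * gamma  # relativistic Doppler shift
    return plasma_boosted / wavelength_boosted

lab     = scale_disparity(gamma=1.0)    # laboratory frame
boosted = scale_disparity(gamma=100.0)  # illustrative boost
print(f"lab-frame disparity:     {lab:.1e}")
print(f"boosted-frame disparity: {boosted:.1e}")
print(f"reduction factor:        {lab / boosted:.1e}")  # roughly (1 + beta) * gamma^2
```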

Eliminating instability

Early attempts to apply the boosted-frame method to laser-plasma wakefield simulations encountered numerical instabilities that limited how much the calculation frame could be boosted. Calculations could still be speeded up tens or even hundreds of times, but the full promise of the method could not be realized.

Vay's team showed that using a particular boosted frame, that of the wakefield itself -- in which the laser pulse is almost stationary -- realizes near-optimal speedup of the calculation. And it fundamentally modifies the appearance of the laser in the plasma. In the laboratory frame the observer sees many oscillations of the electromagnetic field in the laser pulse; in the frame of the wake, the observer sees just a few at a time.

Not only is speedup possible because of the coarser resolution, but at the same time numerical instabilities due to short wavelengths can be suppressed without affecting the laser pulse. Combined with special techniques for interpreting the data between frames, this allows the full potential of the boosted-frame principle to be reached.

"We produced the first full multidimensional simulation of the 10 billion-electron-volt design for BELLA," says Vay."We even ran simulations all the way up to a trillion electron volts, which establishes our ability to model the behavior of laser-plasma wakefield accelerator stages at varying energies. With this calculation we achieved the theoretical maximum speedup of the boosted-frame method for such systems -- a million times faster than similar calculations in the laboratory frame."

Simulations will still be challenging, especially those needed to tailor applications of high-energy laser-plasma wakefield accelerators to such uses as free-electron lasers for materials and biological sciences, or for homeland security or other research. But the speedup achieves what might otherwise have been virtually impossible: it puts the essential high-resolution simulations within reach of new supercomputers.

This work was supported by the U.S. Department of Energy's Office of Science, including calculations with the WARP beam-simulation code and other applications at the National Energy Research Scientific Computing Center (NERSC).


Source

Friday, March 18, 2011

Bomb Disposal Robot Getting Ready for Front-Line Action

The organisations have come together to create a lightweight, remote-operated vehicle, or robot, that can be controlled by a wireless device, not unlike a games console, from a distance of several hundred metres.

The innovative robot, which can climb stairs and even open doors, will be used by soldiers on bomb disposal missions in countries such as Afghanistan.

Experts from the Department of Computer & Communications Engineering, based within the university's School of Engineering, are working on the project alongside NIC Instruments Limited of Folkestone, manufacturers of security search and bomb disposal equipment.

Much lighter and more flexible than traditional bomb disposal units, the robot is easier for soldiers to carry and use when out in the field. It has cameras on board, which relay images back to the operator via the hand-held control, and includes a versatile gripper which can carry and manipulate delicate items.

The robot also includes nuclear, biological and chemical weapons sensors.

Measuring just 72cm by 35cm, the robot weighs 48 kilogrammes and can move at speeds of up to eight miles per hour.


Source

Wednesday, March 16, 2011

Room-Temperature Spintronic Computers Coming Soon? Silicon Spin Transistors Heat Up and Spins Last Longer

"Electronic devices mostly use the charge of the electrons -- a negative charge that is moving," says Ashutosh Tiwari, an associate professor of materials science and engineering at the University of Utah."Spintronic devices will use both the charge and the spin of the electrons. With spintronics, we want smaller, faster and more power-efficient computers and other devices."

Tiwari and Ph.D. student Nathan Gray report their creation of room-temperature, spintronic transistors on a silicon semiconductor this month in the journal Applied Physics Letters. The research -- in which electron "spin" aligned in a certain way was injected into silicon chips and maintained for a record 276 trillionths of a second -- was funded by the National Science Foundation.

"Almost every electronic device has silicon-based transistors in it," Gray says."The current thrust of industry has been to make those transistors smaller and to add more of them into the same device" to process more data. He says his and Tiwari's research takes a different approach.

"Instead of just making transistors smaller and adding more of them, we make the transistors do more work at the same size because they have two different ways {electron charge and spin} to manipulate and process data," says Gray.

A Quick Spin through Spintronics

Modern computers and other electronic devices work because negatively charged electrons flow as electrical current. Transistors are switches that reduce computerized data to a binary code of ones or zeros represented by the presence or absence of electrons in semiconductors, most commonly silicon.

In addition to electric charge, electrons have another property known as spin, which is like the electron's intrinsic angular momentum. An electron's spin often is described as a bar magnet that points up or down, which also can represent ones and zeroes for computing.

Most previous research on spintronic transistors involved using optical radiation -- in the form of polarized light from lasers -- to orient the electron spins in non-silicon materials such as gallium arsenide or organic semiconductors at supercold temperatures.

"Optical methods cannot do that with silicon, which is the workhorse of the semiconductor and electronics industry, and the industry doesn't want to retool for another material," Tiwari says.

"Spintronics will become useful only if we use silicon," he adds.

The Experiment

In the new study, Tiwari and Gray used electricity and magnetic fields to inject "spin polarized carriers" -- namely, electrons with their spins aligned either all up or all down -- into silicon at room temperature.

Their trick was to use magnesium oxide as a "tunnel barrier" to get the aligned electron spins to travel from one nickel-iron electrode through the silicon semiconductor to another nickel-iron electrode. Without the magnesium oxide, the spins would get randomized almost immediately, with half up and half down, Gray says.

"This thing works at room temperature," Tiwari says."Most of the devices in earlier studies have to be cooled to very low temperatures" -- colder than 200 below zero Fahrenheit -- to align the electrons' spins either all up or all down."Our new way of putting spin inside the silicon does not require any cooling."

The experiment used a flat piece of silicon about 1 inch long, about 0.3 inches wide and one-fiftieth of an inch thick. An ultra-thin layer of magnesium oxide was deposited on the silicon wafer. Then, a dozen tiny transistors were deposited on top so they could be used to inject electrons with aligned spins into the silicon and later detect them.

Each nickel-iron transistor had three contacts or electrodes: one through which electrons with aligned spins were injected into the silicon and detected, a negative electrode and a positive electrode used to measure voltage.

During the experiment, the researchers send direct current through the spin-injector electrode and negative electrode of each transistor. The current is kept steady, and the researchers measure variations in voltage while applying a magnetic field to the apparatus.

"By looking at the change in the voltage when we apply a magnetic field, we can find how much spin has been injected and the spin lifetime," Tiwari says.

A 328 Nanometer, 276 Picosecond Step for Spintronics

For spintronic devices to be practical, electrons with aligned spins need to be able to move adequate distances and retain their spin alignments for an adequate time.

During the new study, the electrons retained their spins for 276 picoseconds, or 276 trillionths of a second. And based on that lifetime, the researchers calculate the spin-aligned electrons moved through the silicon 328 nanometers, which is 328 billionths of a meter or about 13 millionths of an inch.
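
The step from a lifetime to a distance can be illustrated with the usual spin diffusion length, the square root of the diffusion constant times the lifetime. The diffusion constant below is an assumed value for the silicon channel, chosen only to show that the relation lands near the reported figure; it is not a number from the paper.

```python
import math

tau_spin = 276e-12     # measured spin lifetime, seconds
D        = 3.9e-4      # ASSUMED electron diffusion constant, m^2/s (about 3.9 cm^2/s)

spin_diffusion_length = math.sqrt(D * tau_spin)
print(f"spin diffusion length: {spin_diffusion_length * 1e9:.0f} nm")  # ~328 nm with this D
```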

"It's a tiny distance for us, but in transistor technology, it is huge," Gray says."Transistors are so small today that that's more than enough to get the electron where we need it to go."

"Those are very good numbers," Tiwari says."These numbers are almost 10 times bigger than what we need {for spintronic devices} and two times bigger than if you use aluminum oxide" instead of the magnesium oxide in his study.

He says Dutch researchers previously were able to inject aligned spins into silicon using aluminum oxide as the "tunneling medium," but the new study shows magnesium oxide works better.

The new study's use of electronic spin injection is much more practical than using optical methods such as lasers because lasers are too big for chips in consumer electronic devices, Tiwari says.

He adds that spintronic computer processors require little power compared with electronic devices, so a battery that may power an electronic computer for eight hours might last more than 24 hours on a spintronic computer.

Gray says spintronics is "the next big step to push the limits of semiconductor technology that we see in every aspect of our lives: computers, cell phones, GPS (navigation) devices, iPods, TVs."


Source

Thursday, March 10, 2011

Web-Crawling the Brain: 3-D Nanoscale Model of Neural Circuit Created

Researchers in Harvard Medical School's Department of Neurobiology have developed a technique for unraveling these masses. Through a combination of microscopy platforms, researchers can crawl through the individual connections composing a neural network, much as Google crawls Web links.

"The questions that such a technique enables us to address are too numerous even to list," said Clay Reid, HMS professor of neurobiology and senior author on a paper reporting the findings in the March 10 edition ofNature.

The cerebral cortex is arguably the most important part of the mammalian brain. It processes sensory input, reasoning and, some say, even free will. For the past century, researchers have understood the broad outline of cerebral cortex anatomy. In the past decade, imaging technologies have allowed us to see neurons at work within a cortical circuit, to watch the brain process information.

But while these platforms can show us what a circuit does, they don't show us how it operates.

For many years, Reid's lab has been studying the cerebral cortex, adapting ways to hone the detail with which we can view the brain at work. Recently they and others have succeeded in isolating the activities of individual neurons, watching them fire in response to external stimuli.

The ultimate prize, however, would be to get inside a single cortical circuit and probe the architecture of its wiring.

Just one of these circuits, however, contains between 10,000 and 100,000 neurons, each of which makes about 10,000 interconnections, totaling upwards of 1 billion connections -- all within a single circuit. "This is a radically hard problem to address," Reid said.

Reid's team, which included Davi Bock, then a graduate student, and postdoctoral researcher Wei-Chung Allen Lee, embarked on a two-part study of the pinpoint-sized region of a mouse brain that is involved in processing vision. They first injected the brain with dyes that flashed whenever specific neurons fired and recorded the firings using a laser-scanning microscope. They then conducted a large anatomy experiment, using electron microscopy to see the same neurons and hundreds of others with nanometer resolution.

Using a new imaging system they developed, the team recorded more than 3 million high-resolution images. They sent them to the Pittsburgh Supercomputing Center at Carnegie Mellon University, where researchers stitched them into 3-D images. Using the resulting images, Bock, Lee and laboratory technician Hyon Suk Kim selected 10 individual neurons and painstakingly traced many of their connections, crawling through the brain's dense thicket to create a partial wiring diagram.

This model also yielded some interesting insights into how the brain functions. Reid's group found that neurons tasked with suppressing brain activity seem to be randomly wired, putting the lid on local groups of neurons all at once rather than picking and choosing. Such findings are important because many neurological conditions, such as epilepsy, are the result of neural inhibition gone awry.

"This is just the iceberg's tip," said Reid."Within ten years I'm convinced we'll be imaging the activity of thousands of neurons in a living brain. In a visual circuit, we'll interpret the data to reconstruct what an animal actually sees. By that time, with the anatomical imaging, we'll also know how it's all wired together."

For now, Reid and his colleagues are working to scale up this platform to generate larger data sets.

"How the brain works is one of the greatest mysteries in nature," Reid added,"and this research presents a new and powerful way for us to explore that mystery."

This research was funded by the Center for Brain Science at Harvard University, Microsoft Research, and the NIH through the National Eye Institute. Researchers report no conflicts of interest.


Source

Wednesday, March 9, 2011

Real March Madness Is Relying on Seedings to Determine Final Four

According to an operations research analysis model developed by Sheldon H. Jacobson, a professor of computer science and the director of the simulation and optimization laboratory at the University of Illinois, you're better off picking a combination of two top-seeded teams, a No. 2 seed and a No. 3 seed.

"There are patterns that exist in the seeds," Jacobson says."As much as we like to believe otherwise, the fact of the matter is that we've uncovered a model that captures this pattern. As a result of that, in spite of what we emotionally feel about teams or who's going to win, the reality is that the numbers trump all of these things," Jacobson said."It's more likely to be 1, 1, 2, 3 in the Final Four than four No. 1's."

Jacobson's model is unique in that it prognosticates not based on who the teams are, but on the seeds they hold. He describes his model in a forthcoming paper in the journal Omega with co-authors Alex Nikolaev, of the University at Buffalo; Adrian Lee, of CITERI (Central Illinois Technology and Education Research Institute); and Douglas King, a graduate student at Illinois.

Jacobson has also integrated the model into a user-friendly website to help March Madness fans determine the relative probability of their chosen team combinations appearing in the final rounds of the NCAA men's basketball tournament.

A number of websites offer assistance to budding bracketologists, such as game-by-game probabilities of certain match-ups or determining the spread on a given team reaching a particular point in the tournament. Jacobson's website is the only one to look at collective groups of seeds within the brackets.

"What we do is use the power of analytics to uncover trends in 'bracketology.' It really is a mathematical science," he said."What our model enables us to do is look at the likelihood or probability that a certain set of seed combinations will occur as we advance deeper into the tournament."

Jacobson's team applied a statistical method called goodness-of-fit testing to NCAA tournament data from 1985 to 2010, identifying patterns in seed distribution in the Elite Eight, Final Four and national championship rounds. They found that the seeds themselves exhibit certain statistical patterns, independent of the team. They then fit the pattern to a stochastic model they can use to assess probabilities and odds.
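
The snippet below is a much cruder stand-in for that model, shown only to make the arithmetic of scoring a seed combination concrete: take each seed's probability of winning its region (placeholder values, not the fitted ones from the paper), assume the four regions behave independently, count the ways the seeds can be arranged across regions, and multiply.

```python
from collections import Counter
from math import factorial, prod

# Placeholder per-seed probabilities of winning a region (reaching the Final Four).
# In the actual model these would come from fitting the 1985-2010 tournament data.
p = {1: 0.40, 2: 0.22, 3: 0.12, 16: 1.9e-4}

def combo_probability(seeds):
    """P(the four region winners form exactly this multiset of seeds),
    assuming independent, identically distributed regions."""
    counts = Counter(seeds)
    orderings = factorial(len(seeds))
    for c in counts.values():
        orderings //= factorial(c)
    return orderings * prod(p[s] ** c for s, c in counts.items())

for combo in [(1, 1, 1, 1), (1, 1, 2, 3), (16, 16, 16, 16)]:
    prob = combo_probability(combo)
    print(combo, f"p = {prob:.3g}", f"(about once every {1 / prob:.3g} tournaments)")
```

With these placeholder numbers the all-No.-1 Final Four comes out near the once-in-39-years figure quoted below, and a 1, 1, 2, 3 combination scores roughly twice as likely, simply because there are twelve ways to distribute those seeds across the four regions.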

Two computer science undergraduates, Ammar Rizwan and Emon Dai, built the website bracketodds.cs.illinois.edu based on Jacobson's model. The publicly accessible website will be up through the entire tournament. Users can evaluate their brackets and also can compare relative likelihood of two sets of seed combinations.

"For each of the rounds that we have available, you could put in what you have so far and even compare it to other possible sets," Rizwan said.

For example, the probability of the Final Four comprising the four top-seeded teams is 0.026, or once every 39 years. Meanwhile, the probability of a Final Four of all No. 16 seeds -- the lowest-seeded teams in the tournament -- is so small that it would be expected to happen only once every eight hundred trillion years. (The Milky Way contains an estimated one hundred billion stars.)

"Basically, if every star was given a year, the years it would take for this to occur is 8,000 times all the stars in the galaxy," Jacobson said."It gives you perspective."

However, sets with long odds do happen. The most unlikely combination in the 26 years studied occurred in 2000, with a Final Four seed combination of 1, 5, 8 and 8. But such a bracket is only predicted to happen once every 32,000 years, so those filling out brackets at home shouldn't hope for a repeat.

What amateur bracketologists can be confident of is upsets. For even the most probable Final Four combination of 1, 1, 2, 3 to occur, two top-seeded schools have to lose.

"In fact, upsets occur with great frequency and great predictability. If you look statistically, there's a certain number of upsets that occur in each round. We just don't know which team they're going to be or when they're going to occur," Jacobson said.

After the 2011 tournament, and in years to come, Jacobson will integrate the new data into the model to continually refine its prediction power. For 2012, Jacobson, Rizwan and Dai hope to integrate a comparative probability feature into the website to allow users to calculate, for example, the probability of a particular set of Final Four seeds if the Elite Eight seeds are given.

Until then, users can find out how likely their picks really are, and compare them against friends' picks -- or even sports commentators'.

"We're not here specifically to say 'Syracuse is going to beat Kentucky in the Elite Eight.' What we're saying is that the seed numbers have patterns," Jacobson said."A 1, 1, 2, 3 is the most likely Final Four. I don't know which two 1's, I don't know which No. 2 and I don't know which No. 3. But I can tell you that if you want to go purely with the odds, choose a Final Four with seeds 1, 1, 2, 3."


Source

Tuesday, March 8, 2011

How Can Robots Get Our Attention?

The research is being presented March 8 at the Human-Robot Interaction conference in Lausanne, Switzerland.

"The primary focus was trying to give Simon, our robot, the ability to understand when a human being seems to be reacting appropriately, or in some sense is interested now in a response with respect to Simon and to be able to do it using a visual medium, a camera," said Aaron Bobick, professor and chair of the School of Interactive Computing in Georgia Tech's College of Computing.

Using the socially expressive robot Simon, from Assistant Professor Andrea Thomaz's Socially Intelligent Machines lab, researchers wanted to see if they could tell when he had successfully attracted the attention of a human who was busily engaged in a task and when he had not.

"Simon would make some form of a gesture, or some form of an action when the user was present, and the computer vision task was to try to determine whether or not you had captured the attention of the human being," said Bobick.

With close to 80 percent accuracy Simon was able to tell, using only his cameras as a guide, whether someone was paying attention to him or ignoring him.

"We would like to bring robots into the human world. That means they have to engage with human beings, and human beings have an expectation of being engaged in a way similar to the way other human beings would engage with them," said Bobick.

"Other human beings understand turn-taking. They understand that if I make some indication, they'll turn and face someone when they want to engage with them and they won't when they don't want to engage with them. In order for these robots to work with us effectively, they have to obey these same kinds of social conventions, which means they have to perceive the same thing humans perceive in determining how to abide by those conventions," he added.

Researchers plan to go further with their investigations into how Simon can read communication cues by studying whether he can tell by a person's gaze whether they are paying attention or using elements of language or other actions.

"Previously people would have pre-defined notions of what the user should do in a particular context and they would look for those," said Bobick."That only works when the person behaves exactly as expected. Our approach, which I think is the most novel element, is to use the user's current behavior as the baseline and observe what changes."

The research team for this study consisted of Bobick, Thomaz, doctoral student Jinhan Lee and undergraduate student Jeffrey Kiser.


Source

Monday, March 7, 2011

Reconfigurable Supercomputing Outperforms Rivals in Important Science Applications

In November, the TOP500 list of the world's most powerful supercomputers, for the first time ever, named the Chinese Tianhe-1A system at the National Supercomputing Center in Tianjin, China, as No. 1.

In his state of the union speech, President Barack Obama noted,"Just recently, China became home of the world's largest solar research facility, and the world's fastest computer."

But that list does not include reconfigurable supercomputers such as Novo-G, built and developed at the University of Florida, said Alan George, professor of electrical and computer engineering, and director of the National Science Foundation's Center for High-Performance Reconfigurable Computing, known as CHREC.

"Novo-G is believed to be the most powerful reconfigurable machine on the planet and, for some applications, it is the most powerful computer of any kind on the planet," George said.

"It is very difficult to accurately rank supercomputers because it depends upon what you want them to do," George said, adding that the TOP500 list ranks supercomputers by their performance on a few basic routines in linear algebra using 64-bit, floating-point arithmetic.

However, a significant number of the most important applications in the world do not adhere to that standard, including a growing list of vital applications in health and life sciences, signal and image processing, financial science, and more under study with Novo-G at Florida.

Most of the world's computers, from smart-phones to laptops to Tianhe-1A, feature microprocessors with fixed-logic hardware structures. All software applications for these systems must conform to these fixed structures, which can lead to a significant loss in speed and increase in energy consumption.

By contrast, with reconfigurable machines, a relatively new and highly innovative form of computing, the architecture can adapt to match the unique needs of each application, which can lead to much faster speed and less wasted energy due to adaptive hardware customization.

Novo-G uses 192 reconfigurable processors and "can rival the speed of the world's largest supercomputers at a tiny fraction of their cost, size, power, and cooling," the researchers noted in a new article on Novo-G published in the January-February edition of the IEEE magazine Computing in Science and Engineering.

Conventional supercomputers, some the size of a large building, can consume up to millions of watts of electrical power, generating massive amounts of heat, whereas Novo-G is about the size of two home refrigerators and consumes less than 8,000 watts.

Later this year, researchers will double the reconfigurable capacity of Novo-G, an upgrade only requiring a modest increase in size, power, and cooling, unlike upgrades with conventional supercomputers.

In their article, the researchers discuss Novo-G and its obvious advantages for use in certain applications such as genome research, cancer diagnosis, plant science, and the ability to analyze large data sets.

Herman Lam, an electrical and computer engineering professor and co-investigator on Novo-G, said some vital science applications that can take months or years to run on a personal computer can run in minutes or hours on the Novo-G, such as applications for DNA sequence alignment at UF's Interdisciplinary Center for Biotechnology Research.

CHREC includes research sites at four universities including Florida, Brigham Young, George Washington and Virginia Tech. In addition, there are more than 30 partners in CHREC, such as the U.S. Air Force, Army, and Navy, NASA, National Security Agency, Boeing, Honeywell, Lockheed Martin, Monsanto, Northrop Grumman, and the Los Alamos, Oak Ridge and Sandia National Labs.


Source

Sunday, March 6, 2011

World's First Anti-Laser Built

Conventional lasers, which were invented in 1960, use a so-called "gain medium," usually a semiconductor like gallium arsenide, to produce a focused beam of coherent light -- light waves with the same frequency and amplitude that are in step with one another.

Last summer, Yale physicist A. Douglas Stone and his team published a study explaining the theory behind an anti-laser, demonstrating that such a device could be built using silicon, the most common semiconductor material. But it wasn't until now, after joining forces with the experimental group of his colleague Hui Cao, that the team actually built a functioning anti-laser, which they call a coherent perfect absorber (CPA).

The team, whose results appear in the Feb. 18 issue of the journal Science, focused two laser beams with a specific frequency into a cavity containing a silicon wafer that acted as a "loss medium." The wafer aligned the light waves in such a way that they became perfectly trapped, bouncing back and forth indefinitely until they were eventually absorbed and transformed into heat.

Stone believes that CPAs could one day be used as optical switches, detectors and other components in the next generation of computers, called optical computers, which will be powered by light in addition to electrons. Another application might be in radiology, where Stone said the principle of the CPA could be employed to target electromagnetic radiation to a small region within normally opaque human tissue, either for therapeutic or imaging purposes.

Theoretically, the CPA should be able to absorb 99.999 percent of the incoming light. Due to experimental limitations, the team's current CPA absorbs 99.4 percent. "But the CPA we built is just a proof of concept," Stone said. "I'm confident we will start to approach the theoretical limit as we build more sophisticated CPAs." Similarly, the team's first CPA is about one centimeter across at the moment, but Stone said that computer simulations have shown how to build one as small as six microns (about one-twentieth the width of an average human hair).

The team that built the CPA, led by Cao and another Yale physicist, Wenjie Wan, demonstrated the effect for near-infrared radiation, which is slightly "redder" than the eye can see and which is the frequency of light that the device naturally absorbs when ordinary silicon is used. But the team expects that, with some tinkering of the cavity and loss medium in future versions, the CPA will be able to absorb visible light as well as the specific infrared frequencies used in fiber optic communications.

It was while explaining the complex physics behind lasers to a visiting professor that Stone first came up with the idea of an anti-laser. When Stone suggested his colleague think about a laser working in reverse in order to help him understand how a conventional laser works, Stone began contemplating whether it was possible to actually build a laser that would work backwards, absorbing light at specific frequencies rather than emitting it.

"It went from being a useful thought experiment to having me wondering whether you could really do that," Stone said."After some research, we found that several physicists had hinted at the concept in books and scientific papers, but no one had ever developed the idea."


Source

Saturday, March 5, 2011

Human Cues Used to Improve Computer User-Friendliness

"Our research in computer graphics and computer vision tries to make using computers easier," says the Binghamton University computer scientist."Can we find a more comfortable, intuitive and intelligent way to use the computer? It should feel like you're talking to a friend. This could also help disabled people use computers the way everyone else does."

Yin's team has developed ways to provide information to the computer based on where a user is looking as well as through gestures or speech. One of the basic challenges in this area is "computer vision." That is, how can a simple webcam work more like the human eye? Can camera-captured data understand a real-world object? Can this data be used to "see" the user and "understand" what the user wants to do?

To some extent, that's already possible. Witness one of Yin's graduate students giving a PowerPoint presentation and using only his eyes to highlight content on various slides. When Yin demonstrated this technology for Air Force experts last year, the only hardware he brought was a webcam attached to a laptop computer.

Yin says the next step would be enabling the computer to recognize a user's emotional state. He works with a well-established set of six basic emotions -- anger, disgust, fear, joy, sadness, and surprise -- and is experimenting with different ways to allow the computer to distinguish among them. Is there enough data in the way the lines around the eyes change? Could focusing on the user's mouth provide sufficient clues? What happens if the user's face is only partially visible, perhaps turned to one side?

"Computers only understand zeroes and ones," Yin says."Everything is about patterns. We want to find out how to recognize each emotion using only the most important features."

He's partnering with Binghamton University psychologist Peter Gerhardstein to explore ways this work could benefit children with autism. Many people with autism have difficulty interpreting others' emotions; therapists sometimes use photographs of people to teach children how to understand when someone is happy or sad and so forth. Yin could produce not just photographs, but three-dimensional avatars that are able to display a range of emotions. Given the right pictures, Yin could even produce avatars of people from a child's family for use in this type of therapy.

Yin and Gerhardstein's previous collaboration led to the creation of a 3D facial expression database, which includes 100 subjects with 2,500 facial expression models. The database is available at no cost to the nonprofit research community and has become a worldwide test bed for those working on related projects in fields such as biomedicine, law enforcement and computer science.

Once Yin became interested in human-computer interaction, he naturally grew more excited about the possibilities for artificial intelligence.

"We want not only to create a virtual-person model, we want to understand a real person's emotions and feelings," Yin says."We want the computer to be able to understand how you feel, too. That's hard, even harder than my other work."

Imagine if a computer could understand when people are in pain. Some may ask a doctor for help. But others -- young children, for instance -- cannot express themselves or are unable to speak for some reason. Yin wants to develop an algorithm that would enable a computer to determine when someone is in pain based just on a photograph.

Yin describes that health-care application and, almost in the next breath, points out that the same system that could identify pain might also be used to figure out when someone is lying. Perhaps a computer could offer insights like the ones provided by Tim Roth's character, Dr. Cal Lightman, on the television show Lie to Me. The fictional character is a psychologist with an expertise in tracking deception who often partners with law-enforcement agencies.

"This technology," Yin says,"could help us to train the computer to do facial-recognition analysis in place of experts."


Source

Thursday, March 3, 2011

New Developments in Quantum Computing

At the Association for Computing Machinery's 43rd Symposium on Theory of Computing in June, associate professor of computer science Scott Aaronson and his graduate student Alex Arkhipov will present a paper describing an experiment that, if it worked, would offer strong evidence that quantum computers can do things that classical computers can't. Although building the experimental apparatus would be difficult, it shouldn't be as difficult as building a fully functional quantum computer.

Aaronson and Arkhipov's proposal is a variation on an experiment conducted by physicists at the University of Rochester in 1987, which relied on a beam splitter, a device that takes an incoming beam of light and splits it into two beams traveling in different directions. The Rochester researchers demonstrated that if two identical light particles -- photons -- reach the beam splitter at exactly the same time, they will both go either right or left; they won't take different paths. It's another quantum behavior of fundamental particles that defies our physical intuitions.
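
That bunching behavior can be checked with a few lines of linear-optics arithmetic: for ideal, indistinguishable photons, the probability of each output pattern involves the permanent of a submatrix of the beam splitter's unitary. This is the standard textbook treatment rather than code from either paper; for a 50/50 splitter the one-photon-per-port amplitude cancels exactly.

```python
import numpy as np
from itertools import permutations
from math import factorial

def permanent(M):
    """Permanent of a small square matrix (brute force over permutations)."""
    n = M.shape[0]
    return sum(np.prod([M[i, p[i]] for i in range(n)]) for p in permutations(range(n)))

U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # 50/50 beam splitter unitary

def output_probability(counts_out, counts_in=(1, 1)):
    """Probability of a photon-count pattern at the outputs for ideal
    indistinguishable photons (standard linear-optics formula)."""
    rows = [i for i, c in enumerate(counts_out) for _ in range(c)]
    cols = [j for j, c in enumerate(counts_in) for _ in range(c)]
    norm = np.prod([factorial(c) for c in counts_out + counts_in])
    return abs(permanent(U[np.ix_(rows, cols)])) ** 2 / norm

print("both photons exit port 0:", output_probability((2, 0)))   # 0.5
print("both photons exit port 1:", output_probability((0, 2)))   # 0.5
print("one photon in each port: ", output_probability((1, 1)))   # 0.0, they bunch
```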

The MIT researchers' experiment would use a larger number of photons, which would pass through a network of beam splitters and eventually strike photon detectors. The number of detectors would be somewhere in the vicinity of the square of the number of photons -- about 36 detectors for six photons, 100 detectors for 10 photons.

For any run of the MIT experiment, it would be impossible to predict how many photons would strike any given detector. But over successive runs, statistical patterns would begin to build up. In the six-photon version of the experiment, for instance, it could turn out that there's an 8 percent chance that photons will strike detectors 1, 3, 5, 7, 9 and 11, a 4 percent chance that they'll strike detectors 2, 4, 6, 8, 10 and 12, and so on, for any conceivable combination of detectors.

Calculating that distribution -- the likelihood of photons striking a given combination of detectors -- is a hard problem. The researchers' experiment doesn't solve it outright, but every successful execution of the experiment does take a sample from the solution set. One of the key findings in Aaronson and Arkhipov's paper is that, not only is calculating the distribution a hard problem, but so is simulating the sampling of it. For an experiment with more than, say, 100 photons, it would probably be beyond the computational capacity of all the computers in the world.
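
Some rough numbers convey why simulation becomes hopeless as photons are added: the count of possible detection patterns explodes, and each pattern's probability involves a matrix permanent, whose brute-force evaluation grows factorially. The detector count below follows the article's square-of-the-photon-number rule of thumb; the rest is a generic estimate, not a figure from the paper.

```python
from math import comb, factorial

for n_photons in [2, 6, 10, 20, 30]:
    n_detectors = n_photons ** 2                               # detectors ~ photons squared
    patterns = comb(n_detectors + n_photons - 1, n_photons)    # count patterns with repetition
    perm_terms = n_photons * factorial(n_photons)              # brute-force permanent cost
    print(f"{n_photons:>2} photons: {patterns:.3e} possible patterns, "
          f"~{perm_terms:.3e} terms per brute-force permanent")
```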

The question, then, is whether the experiment can be successfully executed. The Rochester researchers performed it with two photons, but getting multiple photons to arrive at a whole sequence of beam splitters at exactly the right time is more complicated. Barry Sanders, director of the University of Calgary's Institute for Quantum Information Science, points out that in 1987, when the Rochester researchers performed their initial experiment, they were using lasers mounted on lab tables and getting photons to arrive at the beam splitter simultaneously by sending them down fiber-optic cables of different lengths. But recent years have seen the advent of optical chips, in which all the optical components are etched into a silicon substrate, which makes it much easier to control the photons' trajectories.

The biggest problem, Sanders believes, is generating individual photons at predictable enough intervals to synchronize their arrival at the beam splitters. "People have been working on it for a decade, making great things," Sanders says. "But getting a train of single photons is still a challenge."

Sanders points out that even if the problem of getting single photons onto the chip is solved, photon detectors still have inefficiencies that could make their measurements inexact: in engineering parlance, there would be noise in the system. But Aaronson says that he and Arkhipov explicitly consider the question of whether simulating even a noisy version of their optical experiment would be an intractably hard problem. Although they were unable to prove that it was, Aaronson says that "most of our paper is devoted to giving evidence that the answer to that is yes." He's hopeful that a proof is forthcoming, whether from his research group or others'.


Source

Wednesday, March 2, 2011

Plug-and-Play Multi-Core Voltage Regulator Could Lead to 'Smarter' Smartphones, Slimmer Laptops and Energy-Friendly Data Centers

Today's consumers expect mobile devices that are increasingly small, yet ever-more powerful. All the bells and whistles, however, suck up energy, and a phone that lasts only 4 hours because it's also a GPS device is of only limited use.

To promote energy-efficient multitasking, Harvard graduate student Wonyoung Kim has developed and demonstrated a new device with the potential to reduce the power usage of modern processing chips.

The advance could allow the creation of"smarter" smartphones, slimmer laptops, and more energy-friendly data centers.

Kim's on-chip, multi-core voltage regulator (MCVR) addresses what amounts to a mismatch between power supply and demand.

"If you're listening to music on your MP3 player, you don't need to send power to the image and graphics processors at the same time," Kim says."If you're just looking at photos, you don't need to power the audio processor or the HD video processor."

"It's like shutting off the lights when you leave the room."

Kim's research at Harvard's School of Engineering and Applied Sciences (SEAS) showed in 2008 that fine-grain voltage control was a theoretical possibility. This month, he presented a paper at the Institute of Electrical and Electronics Engineers' (IEEE) International Solid-State Circuits Conference (ISSCC) showing that the MCVR could actually be implemented in hardware.

Essentially a DC-DC converter, the MCVR can take a 2.4-volt input and scale it down to voltages ranging from 0.4 to 1.4V. Built for speed, it can increase or decrease the output by 1V in under 20 nanoseconds.

The MCVR also uses an algorithm to recognize parts of the processor that are not in use and cuts power to them, saving energy. Kim says it results in a longer battery life (or, in the case of stationary data centers, lower energy bills), while providing the same performance.

The on-chip design means that the power supply can be managed not just for each processor chip, but for each individual core on the chip. The short distance that signals then have to travel between the voltage regulator and the cores allows power scaling to happen quickly -- in a matter of nanoseconds rather than microseconds -- further improving efficiency.
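
A toy version of that per-core policy, using the 0.4 to 1.4 V output range quoted above; the core names and the linear load-to-voltage rule are invented for illustration and are not part of Kim's design.

```python
from dataclasses import dataclass

V_MIN, V_MAX = 0.4, 1.4   # MCVR output range described above, in volts

@dataclass
class Core:
    name: str
    load: float           # utilization estimate in [0, 1]

def target_voltage(core: Core) -> float:
    """Pick a per-core supply: gate it off when idle, otherwise scale with load."""
    if core.load == 0.0:
        return 0.0                                    # "shutting off the lights"
    return V_MIN + (V_MAX - V_MIN) * core.load        # invented linear policy

cores = [Core("audio", 0.7), Core("graphics", 0.0), Core("video", 0.0), Core("cpu", 0.2)]
for core in cores:
    print(f"{core.name:8s} -> {target_voltage(core):.2f} V")
```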

Kim has obtained a provisional patent for the MCVR with his Ph.D. co-advisers at SEAS, Gu-Yeon Wei, Gordon McKay Professor of Electrical Engineering, and David Brooks, Gordon McKay Professor of Computer Science, who are coauthors on the paper he presented this week.

"Wonyoung Kim's research takes an important step towards a higher level of integration for future chips," says Wei."Systems today rely on off-chip, board-level voltage regulators that are bulky and slow. Integrating the voltage regulator along with the IC chip to which it supplies power not only reduces broad-level size and cost, but also opens up exciting opportunities to improve energy efficiency."

"Kim's three-level design overcomes issues that hamper traditional buck and switch-capacitor converters by merging good attributes of both into a single structure," adds Brooks."We believe research on integrated voltage regulators like Kim's will be an essential component of future computing devices where energy-efficient performance and low cost are in demand."

Although Kim estimates that the greatest demand for the MCVR right now could be in the market for mobile phones, the device would also have applications in other computing scenarios. Used in laptops, the MCVR might reduce the heat output of the processor, which is currently one barrier to making slimmer notebooks. In stationary scenarios, the rising cost of powering servers of ever-increasing speed and capacity could be reduced.

"This is a plug-and-play device in the sense that it can be easily incorporated into the design of processor chips," says Kim."Including the MCVR on a chip would add about 10 percent to the manufacturing cost, but with the potential for 20 percent or more in power savings."

The research was supported by the National Science Foundation's Division of Computer and Network Systems and Division of Computing and Communication Foundations.


Source

Tuesday, March 1, 2011

New Generation of Optical Integrated Devices for Future Quantum Computers

Quantum computers, holding the great promise of tremendous computational power for particular tasks, have been the goal of worldwide efforts by scientists for several years. Great advances have been made, but there is still a long way to go.

Building a quantum computer will require a large number of interconnected components -- gates -- which work in a similar way to the microprocessors in current personal computers. Currently, most quantum gates are large structures and the bulky nature of these devices prevents scalability to the large and complex circuits required for practical applications.

Recently, the researchers from the University of Bristol's Centre for Quantum Photonics showed, in several important breakthroughs, that quantum information can be manipulated with integrated photonic circuits. Such circuits are compact (enabling scalability) and stable (with low noise) and could lead in the near future to mass production of chips for quantum computers.

Now the team, in collaboration with Dr Terry Rudolph at Imperial College, London, shows a new class of integrated devices that promises a further reduction in the number of components needed to build future quantum circuits.

These devices, based on optical multimode interference (and therefore often called MMIs), have been widely employed in classical optics as they are compact and very robust to fabrication tolerances. "While building a complex quantum network requires a large number of basic components, MMIs can often enable the implementation with much fewer resources," said Alberto Peruzzo, a PhD student working on the experiment.

Until now it was not clear how these devices would work in the quantum regime. Bristol researchers have demonstrated that MMIs can perform quantum interference at the high fidelity required.

Scientists will now be able to implement more compact photonics circuits for quantum computing. MMIs can generate large entangled states, at the heart of the exponential speedup promised by quantum computing.

"Applications will range from new circuits for quantum computation to ultra precise measurement and secure quantum communication," said Professor Jeremy O'Brien, director of the Centre for Quantum Photonics.

The team now plans to build new sophisticated circuits for quantum computation and quantum metrology using MMI devices.


Source