Friday, December 31, 2010

New Cognitive Robotics Lab Tests Theories of Human Thought

"The real world has a lot of inconsistency that humans handle almost without noticing -- for example, we walk on uneven terrain, we see in shifting light," said Professor Vladislav Daniel Veksler, who is currently teaching Cognitive Robotics."With robots, we can see the problems humans face when navigating their environment."

Cognitive Robotics marries the study of cognitive science -- how the brain represents and transforms information -- with the challenges of a physical environment. Advances in cognitive robotics transfer to artificial intelligence, which seeks to develop more efficient computer systems patterned on the versatility of human thought.

Professor Bram Van Heuveln, who organized the lab, said cognitive scientists have developed a suite of elements -- perception/action, planning, reasoning, memory, decision-making -- that are believed to constitute human thought. When properly modeled and connected, those elements are capable of solving complex problems without the raw power required by precise mathematical computations.

"Suppose we wanted to build a robot to catch fly balls in an outfield. There are two approaches: one uses a lot of calculations -- Newton's law, mechanics, trigonometry, calculus -- to get the robot to be in the right spot at the right time," said Van Heuveln."But that's not the way humans do it. We just keep moving toward the ball. It's a very simple solution that doesn't involve a lot of computation but it gets the job done."

Robotics is an ideal testing ground for that principle because robots act in the real world, and a correct cognitive solution will withstand the unexpected variables presented by the real world.

"The physical world can help us to drive science because it's different from any simulated world we could come up with -- the camera shakes, the motors slip, there's friction, the light changes," Veksler said."This platform -- robotics -- allows us to see that you can't rely on calculations. You have to be adaptive."

The lab is open to all students at Rensselaer. In its first semester, the lab has largely attracted computer science and cognitive science students enrolled in a Cognitive Robotics course taught by Veksler, but Veksler and Van Heuveln hope it will attract more engineering and art students as word of the facility spreads.

"We want different students together in one space -- a place where we can bring the different disciplines and perspectives together," said Van Heuveln."I would like students to use this space for independent research: they come up with the research project, they say 'let's look at this.'"

The lab is equipped with five "Create" robots -- essentially a Roomba robotic vacuum cleaner paired with a laptop; three hand-eye systems; one Chiara (which looks like a large metal crab); and 10 LEGO robots paired with the Sony Handy Board robotic controller.

On a recent day, Jacqui Brunelli and Benno Lee were working on their robot "cat" and "mouse" pair, which respectively try to chase and evade each other; Shane Reilly was improving the computer "vision" of his robotic arm; and Ben Ball was programming his robot to maintain a fixed distance from a pink object waved in front of its "eye."

"The thing that I've learned is that the sensor data isn't exact -- what it 'sees' constantly changes by a few pixels -- and to try to go by that isn't going to work," said Ball, a junior and student of computer science and physics.

Ball said he is trying to pattern his robot on a more human approach.

"We don't just look at an object and walk toward it. We check our position, adjusting our course," Ball said."I need to devise an iterative approach where the robot looks at something, then moves, then looks again to check its results."

The work of the students, who program their robots with the Tekkotsu open-source software, could be applied in future projects, said Van Heuveln.

"As a cognitive scientist, I want this to be built on elements that are cognitively plausible and that are recyclable -- parts of cognition that I can apply to other solutions as well," said Van Heuveln."To me, that's a heck of a lot more interesting than the computational solution."

Their early investigations in a generic domain clearly show how a more cognitive approach employing limited resources can easily outpace more powerful computers using a brute-force approach, said Veksler.

"We look to humans not just because we want to simulate what we do, which is an interesting problem in itself, but also because we're smart," said Veksler."Some of the things we have, like limited working memory -- which may seem like a bad thing -- are actually optimal for solving problems in our environment. If you remembered everything, how would you know what's important?"


Source

Friday, December 3, 2010

New Psychology Theory Enables Computers to Mimic Human Creativity

Solving this "insight problem" requires creativity, a skill at which humans excel (the coin is a fake -- "B.C." and Arabic numerals did not exist at the time) and computers do not. Now, a new explanation of how humans solve problems creatively -- including the mathematical formulations needed to incorporate the theory into artificial intelligence programs -- provides a roadmap to building systems that perform like humans at the task.

Ron Sun, Rensselaer Polytechnic Institute professor of cognitive science, said the new "Explicit-Implicit Interaction Theory," recently introduced in an article in Psychological Review, could be used for future artificial intelligence.

"As a psychological theory, this theory pushes forward the field of research on creative problem solving and offers an explanation of the human mind and how we solve problems creatively," Sun said."But this model can also be used as the basis for creating future artificial intelligence programs that are good at solving problems creatively."

The paper, titled "Incubation, Insight, and Creative Problem Solving: A Unified Theory and a Connectionist Model," by Sun and Sébastien Hélie of the University of California, Santa Barbara, appeared in the July edition of Psychological Review. Discussion of the theory is accompanied by mathematical specifications for the "CLARION" cognitive architecture -- a computer program developed by Sun's research group to act like a cognitive system -- as well as successful computer simulations of the theory.

In the paper, Sun and Hélie compared the performance of the CLARION model using "Explicit-Implicit Interaction" theory with results from previous human trials -- including tests involving the coin question -- and found the results to be nearly identical in several aspects of problem solving.

In the tests involving the coin question, human subjects were given a chance to respond after being interrupted either to discuss their thought process or to work on an unrelated task. In that experiment, 35.6 percent of participants answered correctly after discussing their thinking, while 45.8 percent of participants answered correctly after working on another task.

In 5,000 runs of the CLARION program set for similar interruptions, CLARION answered correctly 35.3 percent of the time in the first instance, and 45.3 percent of the time in the second instance.

"The simulation data matches the human data very well," said Sun.

Explicit-Implicit Interaction theory is the most recent advance on a well-regarded outline of creative problem solving known as "Stage Decomposition," developed by Graham Wallas in his seminal 1926 book "The Art of Thought." According to stage decomposition, humans go through four stages -- preparation, incubation, insight (illumination), and verification -- in solving problems creatively.

Building on Wallas' work, several disparate theories have since been advanced to explain the specific processes used by the human mind during the stages of incubation and insight. Competing theories propose that incubation -- a period away from deliberative work -- is a time of recovery from fatigue of deliberative work, an opportunity for the mind to work unconsciously on the problem, a time during which the mind discards false assumptions, or a time in which solutions to similar problems are retrieved from memory, among other ideas.

Each theory can be represented mathematically in artificial intelligence models. However, most models choose between theories rather than seeking to incorporate multiple theories, and they are therefore fragmentary at best.

Sun and Hélie's Explicit-Implicit Interaction (EII) theory integrates several of the competing theories into a larger equation.

"EII unifies a lot of fragmentary pre-existing theories," Sun said."These pre-existing theories only account for some aspects of creative problem solving, but not in a unified way. EII unifies those fragments and provides a more coherent, more complete theory."

The basic principles of EII propose the coexistence of two different types of knowledge and processing: explicit and implicit. Explicit knowledge is easier to access and verbalize, can be rendered symbolically, and requires more attention to process. Implicit knowledge is relatively inaccessible, harder to verbalize, vaguer, and requires less attention to process.

In solving a problem, explicit knowledge could be the knowledge used in reasoning, deliberately thinking through different options, while implicit knowledge is the intuition that gives rise to a solution suddenly. Both types of knowledge are involved simultaneously to solve a problem and reinforce each other in the process. By including this principle in each step, Sun was able to achieve a successful system.
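The division of labor can be illustrated with a deliberately simple sketch. This is not the CLARION architecture or the paper's mathematics, just a toy in which an explicit, rule-like pathway and an implicit, associative pathway score the same candidate answers and their outputs are combined; the weights and scores are invented.

```python
import random

CANDIDATES = ["the coin is genuine", "the coin is a fake"]

def explicit_score(answer):
    """Deliberate, rule-based reasoning: precise where an explicit rule applies."""
    # Rule: a coin could not have been stamped with a "B.C." date at the time.
    return 1.0 if answer == "the coin is a fake" else 0.0

def implicit_score(answer):
    """Intuitive, associative processing: vague and noisy, but always available."""
    hunch = {"the coin is genuine": 0.4, "the coin is a fake": 0.6}[answer]
    return hunch + random.uniform(-0.2, 0.2)

def solve(weight_explicit=0.5):
    """Both pathways work on the problem at once; their scores are combined."""
    combined = {
        a: weight_explicit * explicit_score(a)
           + (1 - weight_explicit) * implicit_score(a)
        for a in CANDIDATES
    }
    return max(combined, key=combined.get)

print(solve())
```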

"This tells us how creative problem solving may emerge from the interaction of explicit and implicit cognitive processes; why both types of processes are necessary for creative problem solving, as well as in many other psychological domains and functionalities," said Sun.



Source

Thursday, December 2, 2010

Breakthrough Chip Technology Lights Path to Exascale Computing: Optical Signals Connect Chips Together Faster and With Lower Power

The new technology, called CMOS Integrated Silicon Nanophotonics, is the result of a decade of development at IBM's global research laboratories. The patented technology will change and improve the way computer chips communicate -- by integrating optical devices and functions directly onto a silicon chip, enabling more than a 10X improvement in integration density over what is feasible with current manufacturing techniques.

IBM anticipates that Silicon Nanophotonics will dramatically increase the speed and performance between chips, and further the company's ambitious exascale computing program, which is aimed at developing a supercomputer that can perform one million trillion calculations -- or an exaflop -- in a single second. An exascale supercomputer will be approximately one thousand times faster than the fastest machine today.
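For scale, the arithmetic behind the exaflop target is simply a unit conversion:

\[
1\ \text{exaflop} = 10^{6} \times 10^{12}\ \text{calculations per second} = 10^{18}\ \text{calculations per second} = 1000\ \text{petaflops}.
\]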

"The development of the Silicon Nanophotonics technology brings the vision of on-chip optical interconnections much closer to reality," said Dr. T.C. Chen, vice president, Science and Technology, IBM Research."With optical communications embedded into the processor chips, the prospect of building power-efficient computer systems with performance at the exaflop level is one step closer to reality."

In addition to combining electrical and optical devices on a single chip, the new IBM technology can be produced on the front-end of a standard CMOS manufacturing line and requires no new or special tooling. With this approach, silicon transistors can share the same silicon layer with silicon nanophotonics devices. To make this approach possible, IBM researchers have developed a suite of integrated ultra-compact active and passive silicon nanophotonics devices that are all scaled down to the diffraction limit -- the smallest size that dielectric optics can afford.

"Our CMOS Integrated Nanophotonics breakthrough promises unprecedented increases in silicon chip function and performance via ubiquitous low-power optical communications between racks, modules, chips or even within a single chip itself," said Dr. Yurii A. Vlasov, Manager of the Silicon Nanophotonics Department at IBM Research."The next step in this advancement is to establishing manufacturability of this process in a commercial foundry using IBM deeply scaled CMOS processes."

By adding just a few more processing modules to a standard CMOS fabrication flow, the technology enables a variety of silicon nanophotonics components -- such as modulators, germanium photodetectors and ultra-compact wavelength-division multiplexers -- to be integrated with high-performance analog and digital CMOS circuitry. As a result, single-chip optical communications transceivers can now be manufactured in a standard CMOS foundry, rather than assembled from multiple parts made with expensive compound semiconductor technology.

The density of optical and electrical integration demonstrated by IBM's new technology is unprecedented -- a single transceiver channel with all accompanying optical and electrical circuitry occupies only 0.5 mm², 10 times smaller than previously announced by others. The technology is amenable to building single-chip transceivers with an area as small as 4x4 mm² that can receive and transmit over a terabit per second -- that is, more than a trillion bits per second.
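As a rough sanity check using the article's own numbers -- the per-channel rate of 40 Gb/s is borrowed from the photodetector milestone listed below, so treat this as an estimate rather than an IBM specification:

\[
\frac{4 \times 4\ \text{mm}^2}{0.5\ \text{mm}^2\ \text{per channel}} = 32\ \text{channels}, \qquad 32 \times 40\ \text{Gb/s} \approx 1.3\ \text{Tb/s}.
\]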

The development of CMOS Integrated Silicon Nanophotonics is the culmination of a series of related advancements by IBM Research that resulted in the development of deeply scaled front-end integrated Nanophotonics components for optical communications. These milestones include:

  • March 2010, IBM announced a Germanium Avalanche Photodetector working at an unprecedented 40 Gb/s with CMOS-compatible voltages as low as 1.5V. This was the last piece of the puzzle that completed the prior development of the "nanophotonics toolbox" of devices necessary to build the on-chip interconnects.
  • March 2008, IBM scientists announced the world's tiniest nanophotonic switch for "directing traffic" in on-chip optical communications, ensuring that optical messages can be efficiently routed.
  • December 2007, IBM scientists announced the development of an ultra-compact silicon electro-optic modulator, which converts electrical signals into light pulses, a prerequisite for enabling on-chip optical communications.
  • December 2006, IBM scientists demonstrated a silicon nanophotonic delay line that was used to buffer over a byte of information encoded in optical pulses -- a requirement for building optical buffers for on-chip optical communications.

The details and results of this research effort were reported in a presentation delivered by Dr. Yurii Vlasov at SEMICON, a major international semiconductor industry conference held in Tokyo on Dec. 1, 2010. The talk, entitled "CMOS Integrated Silicon Nanophotonics: Enabling Technology for Exascale Computational Systems," was co-authored by William Green, Solomon Assefa, Alexander Rylyakov, Clint Schow, Folkert Horst, and Yurii Vlasov of IBM's T.J. Watson Research Center in Yorktown Heights, N.Y. and IBM Zurich Research Lab in Rueschlikon, Switzerland.

Additional information on the project can be found at http://www.research.ibm.com/photonics.



Source

Tuesday, November 30, 2010

Genomic Fault Zones Come and Go: Fragile Regions in Mammalian Genomes Go Through 'Birth and Death' Process

"The genomic architecture of every species on Earth changes on the evolutionary time scale and humans are not an exception. What will be the next big change in the human genome remains unknown, but our approach could be useful in determining where in the human genome those changes may occur," said Pavel Pevzner, a UC San Diego computer science professor and an author on the new study. Pevzner studies genomes and genome evolution from a computational perspective in the Department of Computer Science and Engineering at the UC San Diego Jacobs School of Engineering.

The fragile regions of genomes are prone to "genomic earthquakes" that can trigger chromosome rearrangements, disrupt genes, alter gene regulation and otherwise play an important role in genome evolution and the emergence of new species. For example, humans have 23 chromosomes while some other apes have 24 chromosomes, a consequence of a genome rearrangement that fused two chromosomes in our ape ancestor into human chromosome 2.

This work was performed by Pevzner and Max Alekseyev -- a computer scientist who recently finished his Ph.D. in the Department of Computer Science and Engineering at the UC San Diego Jacobs School of Engineering. Alekseyev is now a computer science professor at the University of South Carolina.

Turnover Fragile Breakage Model

"The main conclusion of the new paper is that these fragile regions are moving," said Pevzner.

In 2003, Pevzner and UC San Diego mathematics professor Glen Tesler published results claiming that genomes have "fault zones," or genomic regions that are more prone to rearrangements than other regions. Their "Fragile Breakage Model" countered the then largely accepted "Random Breakage Model" -- which implies that there are no rearrangement hotspots in mammalian genomes. While the Fragile Breakage Model has been supported by many studies in the last seven years, the precise locations of fragile regions in the human genome remain elusive.

The new work published in Genome Biology offers an update to the Fragile Breakage Model called the "Turnover Fragile Breakage Model." The findings demonstrate that the fragile regions undergo a birth and death process over evolutionary timescales and provide a clue to where the fragile regions in the human genome are located.

Do the Math: Find Fragile Regions

Finding the fragile regions within genomes is akin to looking at a mixed up deck of cards and trying to determine how many times it has been shuffled.

Looking at a genome, you may identify breaks, but to say it is a fragile region, you have to know that breaks occurred more than once at the same genomic position. "We are figuring out which regions underwent multiple genome earthquakes by analyzing the present-day genomes that survived these earthquakes that happened millions of years ago. The notion of rearrangements cannot be applied to a single genome at a single point in time. It's relevant when looking at more than one genome," said Pevzner, explaining the comparative genomics approach they took.
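In outline, that comparative approach can be sketched in a few lines of code. This is a simplified illustration with invented block labels, not the authors' actual method: each genome is an ordered list of shared synteny blocks, a reference adjacency that is broken in another genome marks a breakpoint, and adjacencies broken in several independent comparisons behave like fragile regions.

```python
from collections import Counter

# Simplified comparative-genomics sketch (invented data, not the real method).
def adjacencies(genome):
    """Unordered pairs of neighbouring synteny blocks along a genome."""
    return {frozenset(pair) for pair in zip(genome, genome[1:])}

reference = ["A", "B", "C", "D", "E", "F"]
others = {
    "genome1": ["A", "B", "E", "D", "C", "F"],   # the C-D-E segment inverted
    "genome2": ["A", "C", "B", "D", "E", "F"],   # B and C swapped
    "genome3": ["A", "B", "C", "F", "E", "D"],   # the D-E-F segment inverted
}

ref_adj = adjacencies(reference)
breaks = Counter()
for name, genome in others.items():
    for pair in ref_adj - adjacencies(genome):   # reference adjacencies broken here
        breaks[pair] += 1

# Adjacencies broken in more than one comparison behave like fragile sites.
for pair, count in breaks.most_common():
    print(sorted(pair), "broken in", count, "comparison(s)")
```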

"It was noticed that while fragile regions may be shared across different genomes, most often such shared fragile regions are found in evolutionarily close genomes. This observation led us to a conclusion that fragility of any particular genomic position may appear only for a limited amount of time. The newly proposed Turnover Fragile Breakage Model postulates that fragile regions are subject to a 'birth and death' process and thus have limited lifespan," explained Alekseyev.

The Turnover Fragile Breakage Model suggests that genome rearrangements are more likely to occur at the sites where rearrangements have recently occurred -- and that these rearrangement sites change over tens of millions of years. Thus, the best clue to the current locations of fragile regions in the human genome is offered by rearrangements that happened in our closest ancestors -- chimpanzee and other primates.

Pevzner is eagerly awaiting sequenced primate genomes from the Genome 10K Project. Sequencing the genomes of 10,000 vertebrate species -- including hundreds of primates -- is bound to provide new insights into human evolutionary history and possibly even future rearrangements in the human genome.

"The most likely future rearrangements in human genome will happen at the sites that were recently disrupted in primates," said Pevzner.

Work tied to the new Turnover Fragile Breakage Model may also be useful for understanding genome rearrangements at the level of individuals, rather than entire species. In the future, the computer scientists hope to use similar tools to look at the chromosomal rearrangements that occur within the cells of individual cancer patients over and over again in order to develop new cancer diagnostics and drugs.

Pavel Pevzner is the Ronald R. Taylor Professor of Computer Science at UC San Diego; Director of the NIH Center for Computational Mass Spectrometry; and a Howard Hughes Medical Institute (HHMI) Professor.



Source

Monday, November 29, 2010

'Racetrack' Magnetic Memory Could Make Computer Memory 100,000 Times Faster

Annoyed by how long it took his computer to boot up, Kläui began to think about an alternative. Hard disks are cheap and can store enormous quantities of data, but they are slow; every time a computer boots up, 2-3 minutes are lost while information is transferred from the hard disk into RAM (random access memory). The global cost in terms of lost productivity and energy consumption runs into the hundreds of millions of dollars a day.

Like the tried and true VHS videocassette, the proposed solution involves data recorded on magnetic tape. But the similarity ends there; in this system the tape would be a nickel-iron nanowire, a million times smaller than the classic tape. And unlike a magnetic videotape, in this system nothing moves mechanically. The bits of information stored in the wire are simply pushed around inside the tape using a spin polarized current, attaining the breakneck speed of several hundred meters per second in the process. It's like reading an entire VHS cassette in less than a second.
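A back-of-the-envelope comparison shows why "less than a second" follows; the tape length is an assumed typical value for a standard cassette:

\[
\frac{\sim 250\ \text{m of VHS tape}}{\text{several hundred m/s of domain-wall motion}} \lesssim 1\ \text{s}.
\]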

In order for the idea to be feasible, each bit of information must be clearly separated from the next so that the data can be read reliably. This is achieved by using domain walls with magnetic vortices to delineate two adjacent bits. To estimate the maximum velocity at which the bits can be moved, Kläui and his colleagues* carried out measurements on vortices and found that the physical mechanism could allow for higher access speeds than expected.

Their results were published online October 25, 2010, in the journal Physical Review Letters. Scientists at the Zurich Research Center of IBM (which is developing a racetrack memory) have confirmed the importance of the results in a Viewpoint article. Millions or even billions of nanowires would be embedded in a chip, providing enormous capacity on a shock-proof platform. A market-ready device could be available in as little as 5-7 years.

Racetrack memory promises to be a real breakthrough in data storage and retrieval. Racetrack-equipped computers would boot up instantly, and their information could be accessed 100,000 times more rapidly than with a traditional hard disk. They would also save energy. RAM needs to be powered every millionth of a second, so an idle computer consumes up to 300 mW just maintaining data in RAM. Because Racetrack memory doesn't have this constraint, energy consumption could be slashed by nearly a factor of 300, to a few mW while the memory is idle. It's an important consideration: computing and electronics currently consume 6% of worldwide electricity, a share forecast to increase to 15% by 2025.


Source

Sunday, November 28, 2010

Supercomputing Center Breaks the Petaflops Barrier

NERSC's newest supercomputer, a 153,408 processor-core Cray XE6 system, posted a performance of 1.05 petaflops (quadrillions of calculations per second) running the Linpack benchmark. In keeping with NERSC's tradition of naming computers for renowned scientists, the system is named Hopper in honor of Admiral Grace Hopper, a pioneer in software development and programming languages.

NERSC serves one of the largest research communities of all supercomputing centers in the United States. The center's supercomputers are used to tackle a wide range of scientific challenges, including global climate change, combustion, clean energy, new materials, astrophysics, genomics, particle physics and chemistry. The more than 400 projects being addressed by NERSC users represent the research mission areas of DOE's Office of Science.

The increasing power of supercomputers helps scientists study problems in greater detail and with greater accuracy, such as increasing the resolution of climate models and creating models of new materials with thousands of atoms. Supercomputers are increasingly used to complement scientific experimentation by allowing researchers to test theories using computational models and analyze large scientific data sets. NERSC is also home to Franklin, a 38,128-core Cray XT4 supercomputer with a Linpack performance of 266 teraflops (trillions of calculations per second). Franklin is ranked number 27 on the newest TOP500 list.

The system, installed in September 2010, is funded by DOE's Office of Advanced Scientific Computing Research.


Source

Saturday, November 27, 2010

'Space-Time Cloak' to Conceal Events

Previously, a team led by Professor Sir John Pendry at Imperial College London showed that metamaterials could be used to make an optical invisibility cloak. Now, a team led by Professor Martin McCall has mathematically extended the idea of a cloak that conceals objects to one that conceals events.

"Light normally slows down as it enters a material, but it is theoretically possible to manipulate the light rays so that some parts speed up and others slow down," says McCall, from the Department of Physics at Imperial College London. When light is 'opened up' in this way, rather than being curved in space, the leading half of the light speeds up and arrives before an event, whilst the trailing half is made to lag behind and arrives too late. The result is that for a brief period the event is not illuminated, and escapes detection. Once the concealed passage has been used, the cloak can then be 'closed' seamlessly.

Such a space-time cloak would open up a temporary corridor through which energy, information and matter could be manipulated or transported undetected. "If you had someone moving along the corridor, it would appear to a distant observer as if they had relocated instantaneously, creating the illusion of a Star-Trek transporter," says McCall. "So, theoretically, this person might be able to do something and you wouldn't notice!"

While using the spacetime cloak to make people move undetected is still science fiction, there are many serious applications for the new research, which was funded by the Engineering and Physical Sciences Research Council (EPSRC) and the Leverhulme Trust. Co-author Dr Paul Kinsler developed a proof of concept design using customised optical fibres, which would enable researchers to use the event cloak in signal processing and computing. A given data channel could for example be interrupted to perform a priority calculation on a parallel channel during the cloak operation. Afterwards, it would appear to external parts of the circuit as though the original channel had processed information continuously, so as to achieve 'interrupt-without-interrupt'.

Alberto Favaro, who also worked on the project, explains: "Imagine computer data moving down a channel to be like a highway full of cars. You want to have a pedestrian crossing without interrupting the traffic, so you slow down the cars that haven't reached the crossing, while the cars that are at or beyond the crossing get sped up, which creates a gap in the middle for the pedestrian to cross. Meanwhile an observer down the road would only see a steady stream of traffic." One issue that cropped up during their calculations was how to speed up the transmitted data without violating the laws of relativity. Favaro solved this by devising a clever material whose properties varied in both space and time, allowing the cloak to be formed.
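The analogy lends itself to a toy simulation -- illustrative only, with invented positions and speeds. During an "opening" phase, cars short of the crossing are held back while those at or beyond it are hurried along, so a gap appears; a matching "closing" phase reverses the adjustment, restoring the original spacing so a distant observer sees an unbroken stream.

```python
# Toy rendering of the highway analogy (invented numbers).
CROSSING = 50.0
cars = [float(x) for x in range(0, 100, 10)]      # car positions along the road
DT, V_NORMAL = 1.0, 1.0                           # time step and normal speed

def speed(x, phase):
    if phase == "opening":
        return 0.5 if x < CROSSING else 1.5       # open a gap at the crossing
    if phase == "closing":
        return 1.5 if x < CROSSING else 0.5       # close it again afterwards
    return V_NORMAL

for t in range(12):
    phase = "opening" if 3 <= t < 6 else "closing" if 6 <= t < 9 else "normal"
    gap = min(x for x in cars if x >= CROSSING) - max(x for x in cars if x < CROSSING)
    print(f"t={t:2d} {phase:7s} gap at crossing = {gap:4.1f}")
    cars = [x + speed(x, phase) * DT for x in cars]
```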

"We're sure that there are many other possibilities opened up by our introduction of the concept of the spacetime cloak,' says McCall,"but as it's still theoretical at this stage we still need to work out the concrete details for our proposed applications."

Metamaterials research is an expanding field of science with a vast array of potential uses, spanning defence, security, medicine, data transfer and computing. Many ordinary household devices that work using electromagnetic fields could be made more cheaply or to work at higher speeds. Metamaterials could also be used to control other types of waves besides light, such as sound or water waves, opening up potential applications for protecting coastal or offshore installations, or even engineering buildings to withstand earthquake waves.


Source

Friday, November 26, 2010

Intelligent Detector Provides Real-Time Information on Available Parking Spaces

Testing of the new technology is currently underway at the Universitat Politècnica de Catalunya's North Campus, and a patent is being sought. The system can be used to provide users with information via mobile devices such as phones, laptop computers, and iPads, or using luminous panels in public thoroughfares. In the coming months it will be installed in the 22@Barcelona innovation district and in downtown Figueres.

A team at the Department of Electronic Engineering of the Castelldefels School of Telecommunications and Aerospace Engineering (EETAC), part of the Universitat Politècnica de Catalunya (UPC), has designed a new method for continuously detecting the presence of vehicles using both an optical and a magnetic sensor. The detector incorporates the two sensors in a 4 by 13 cm casing that is set into the pavement of each parking space. Urbiòtica, a company set up by UPC professors and their industrial partners, is testing the system at the UPC's North Campus prior to placing it on the market.

The device works by first detecting the sudden change in the amount of light reaching the pavement that occurs when a vehicle passes over it. The optical sensor then activates the magnetic sensor to verify that the shadow is being produced by a vehicle. This is done by detecting the slight disturbance in Earth's magnetic field that occurs when a car passes over or stops above the device. The two sensors are connected to a microcontroller that executes an algorithm to determine whether or not a vehicle is present. The system's optical sensor is always active but consumes an insignificant amount of power.

When a vehicle is detected, the microcontroller sends a radio-frequency signal, which conveys this information to an antenna connected to a transceiver. This way of transmitting signals is much more economical than using wiring. The transceiver, designed for installation on street lights, receives the information and transmits it to the database or control center within seconds (using technologies such as Wi-Fi or GPRS). Potential clients for the system include municipal services and parking lot operators.
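In pseudocode terms, the two-stage detection flow reads roughly as follows. This is a schematic sketch with hypothetical thresholds, units and function names, not Urbiòtica's firmware: the always-on optical sensor watches for a sudden drop in light, the magnetic sensor is woken only to confirm that the shadow really is a vehicle, and a radio message goes out only when the state of the parking space changes.

```python
LIGHT_DROP = 0.5            # fractional light drop suggesting a shadow (assumed)
MAG_DELTA = 5.0             # field change meaning "vehicle", arbitrary units (assumed)
BASE_LIGHT, BASE_FIELD = 1.0, 50.0

def send_radio(occupied):
    """Stand-in for the RF message relayed to the street-light transceiver."""
    print("radio ->", "OCCUPIED" if occupied else "FREE")

def update(occupied, light, field):
    """One pass of the microcontroller loop; returns the new occupancy state."""
    shadowed = light < BASE_LIGHT * (1 - LIGHT_DROP)
    if shadowed and not occupied:
        # In the real device the magnetic sensor would be woken only at this point.
        if abs(field - BASE_FIELD) > MAG_DELTA:
            occupied = True
            send_radio(occupied)
    elif not shadowed and occupied:
        occupied = False                # light is back: the space has cleared
        send_radio(occupied)
    return occupied

# Scripted readings: empty space, a car arrives (shadow plus field disturbance),
# stays a while, then drives away.
occupied = False
for light, field in [(1.0, 50.0), (0.3, 58.0), (0.3, 58.0), (1.0, 50.0)]:
    occupied = update(occupied, light, field)
```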

According to Ramon Pallàs, head of the UPC team that developed the technology (for which a patent is being sought), the plan is to make the information available on luminous panels on public thoroughfares. Users will also be able to receive parking information on mobile devices such as phones, laptop computers, and iPads.

The innovative features of the product (which the UPC's AntenaLAB group also worked on) relate to the field of sensors, the circuits connecting the sensors to the microcontroller, the method for supplying power to the sensors, and management of the power supply for the system as a whole.

Continuous operation with low power consumption

The invention overcomes the shortcomings of the best existing systems for detecting stationary vehicles. There currently exist devices that emit a signal when a car passes over a sensor, but they do not detect whether the vehicle stops. In an enclosed facility these systems can be used to count vehicles entering and leaving and thus determine the number of parking spaces available, but they do not indicate where the free spaces are. Also, the magnetic sensors now in use consume too much energy to be kept running all the time.

In contrast, the system developed by the UPC group and marketed by Urbiòtica operates continuously and uses very little power because the optical sensor is the only component that is always active and the magnetic sensor is activated less frequently than in other similar systems. The fact that the sensors are connected directly to the microcontroller, without any intermediate electronic circuit, also reduces power consumption.

Practical applications

The new system can be used to manage and monitor vehicles on public and private thoroughfares, particularly in urban areas. This makes it possible to monitor points of access to centers of population, restricted zones, security zones, and grade crossings, and to manage parking on streets, at airports, and in commercial and underground parking areas. These applications can reduce the time drivers spend looking for a parking spot, resulting in lower fuel consumption and less pollution.

The characteristics of the system also facilitate other applications, such as the reservation of parking spaces for disabled drivers and payment based on the real time that a parking space is used. The system could also be used to detect areas where lighting is absent or insufficient.

Once pilot testing has been successfully completed, the system will be installed in the 22@Barcelona innovation district (from December on) as part of a Barcelona City Council project to deploy sensor systems, and in the town of Figueres (early in 2011), where it will be used to monitor traffic entering and leaving the city center.


Source

Thursday, November 25, 2010

Short, on-Chip Light Pulses Will Enable Ultrafast Data Transfer Within Computers

Details appeared online in the journal Nature Communications on November 16.

This miniaturized short pulse generator eliminates a roadblock on the way to optical interconnects for use in PCs, data centers, imaging applications and beyond. These optical interconnects, which will aggregate slower data channels with pulse compression, will have far higher data rates and generate less heat than the copper wires they will replace. Such aggregation devices will be critical for future optical connections within and between high speed digital electronic processors in future digital information systems.

"Our pulse compressor is implemented on a chip, so we can easily integrate it with computer processors," said Dawn Tan, the Ph.D. candidate in the Department of Electrical and Computer Engineering at UC San Diego Jacobs School of Engineering who led development of the pulse compressor.

"Next generation computer networks and computer architectures will likely replace copper interconnects with their optical counterparts, and these have to be complementary metal oxide semiconductor (CMOS) compatible. This is why we created our pulse compressor on silicon," said Tan, an electrical engineering graduate student researcher at UC San Diego, and part of the National Science Foundation funded Center for Integrated Access Networks.

The pulse compressor will also provide a cost-effective method to derive short pulses for a variety of imaging technologies such as time-resolved spectroscopy, which can be used to study lasers and electron behavior, and optical coherence tomography, which can capture biological tissues in three dimensions.

In addition to increasing data transfer rates, switching from copper wires to optical interconnects will reduce power consumption caused by heat dissipation, switching and transmission of electrical signals.

"At UC San Diego, we recognized the enabling power of nanophotonics for integration of information systems close to 20 years ago when we first started to use nano-scale lithographic tools to create new optical functionalities of materials and devices -- and most importantly, to enable their integration with electronics on a chip. This Nature Communications paper demonstrates such integration of a few optical signal processing device functionalities on a CMOS compatible silicon-on-insulator material platform," said Yeshaiahu Fainman, a professor in the Department of Electrical and Computer Engineering in the UC San Diego Jacobs School of Engineering. Fainman acknowledged DARPA support in developing silicon photonics technologies which helped to enable this work, through programs such as Silicon-based Photonic Analog Signal Processing Engines with Reconfigurability (Si-PhASER) and Ultraperformance Nanophotonic Intrachip Communications (UNIC).

Pulse Compression for On-Chip Optical Interconnects

The compressed pulses are seven times shorter than the original -- the largest compression demonstrated to date on a chip.

Until now, pulse compression featuring such high compression factors was only possible using bulk optics or fiber-based systems, both of which are bulky and not practical for optical interconnects for computers and other electronics.

The combination of high compression and miniaturization is possible due to a nanoscale, light-guiding tool called an "integrated dispersive element," developed and designed primarily by electrical engineering Ph.D. candidate Dawn Tan.

The new dispersive element offers a much needed component to the on-chip nanophotonics tool kit.

The pulse compressor works in two steps. In step one, the spectrum of incoming laser light is broadened. For example, if green laser light were the input, the output would be red, green and blue laser light. In step two, the new integrated dispersive element developed by the electrical engineers manipulates the light so that each spectral component of the pulse travels at the same speed. This speed synchronization is where pulse compression occurs.

Imagine the laser light as a series of cars. Looking down from above, the cars are initially in a long caravan. This is analogous to a long pulse of laser light. After stage one of pulse compression, the cars are no longer in a single line and they are moving at different speeds. Next, the cars move through the new dispersive grating where some cars are sped up and others are slowed down until each car is moving at the same speed. Viewed from above, the cars are all lined up and pass the finish line at the same moment.

This example illustrates how the on-chip pulse compressor transforms a long pulse of light into a spectrally broader and temporally shorter pulse of light. This temporally compressed pulse will enable multiplexing of data to achieve much higher data speeds.
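The two steps can be mimicked numerically -- a toy sketch with invented parameters, not a model of the UC San Diego device. It starts from a chirped pulse whose spectrum is already broad, then applies a quadratic spectral phase, the job done here by the dispersive element, chosen so the output pulse is as short as possible.

```python
import numpy as np

N, dt = 4096, 1e-3                 # samples and time step (arbitrary units)
t = (np.arange(N) - N // 2) * dt
tau, C = 0.05, 10.0                # pulse width and linear chirp (assumed values)

field = np.exp(-(1 + 1j * C) * t**2 / (2 * tau**2))   # chirped Gaussian pulse

def rms_width(x, t):
    p = np.abs(x)**2
    p /= p.sum()
    mean = (t * p).sum()
    return np.sqrt(((t - mean)**2 * p).sum())

omega = 2 * np.pi * np.fft.fftfreq(N, dt)
spectrum = np.fft.fft(field)

# Scan the quadratic spectral phase (the "dispersive element") and keep the
# setting that yields the shortest output pulse.
best = min(
    (rms_width(np.fft.ifft(spectrum * np.exp(0.5j * D * omega**2)), t), D)
    for D in np.linspace(-5e-4, 5e-4, 201)
)
print(f"input RMS width : {rms_width(field, t):.4f}")
print(f"output RMS width: {best[0]:.4f}  (dispersion setting {best[1]:.2e})")
```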

"In communications, there is this technique called optical time division multiplexing or OTDM, where different signals are interleaved in time to produce a single data stream with higher data rates, on the order of terabytes per second. We've created a compression component that is essential for OTDM," said Tan.

The UC San Diego electrical engineers say they are the first to report a pulse compressor on a CMOS-compatible integrated platform that is strong enough for OTDM.

"In the future, this work will enable integrating multiple 'slow' bandwidth channels with pulse compression into a single ultra-high-bandwidth OTDM channel on a chip. Such aggregation devices will be critical for future inter- and intra-high speed digital electronic processors interconnections for numerous applications such as data centers, field-programmable gate arrays, high performance computing and more," said Fainman, holder of the Cymer Inc. Endowed Chair in Advanced Optical Technologies at the UC San Diego Jacobs School of Engineering and Deputy Director of the NSF-funded Center for Integrated Access Networks.

This work was supported by the Defense Advanced Research Projects Agency, the National Science Foundation (NSF) through Electrical, Communications and Cyber Systems (ECCS) grants, the NSF Center for Integrated Access Networks ERC, the Cymer Corporation and the U.S. Army Research Office.


Source

Wednesday, November 24, 2010

A New Electromagnetism Can Be Simulated Through a Quantum Simulator

There are two fundamental aspects that make these devices attractive for scientists. On the one hand, quantum simulators will play a leading role in clarifying some important, but yet unsolved, puzzles of theoretical physics. On the other hand, such deeper understanding of a given phenomenon will certainly give rise to useful technological applications.

One of the best quantum simulators consists of a gas of extremely cold atoms loaded in an artificial crystal made of light: an optical lattice. Experimental physicists have developed efficient techniques to control the quantum properties of this system, to such extent, that it serves as an ideal quantum simulator of different phenomena.

So far, efforts have been focused on condensed-matter systems, where many open and interesting problems remain to be solved.

In a recent work published in Physical Review Letters by a collaboration of international teams (Universidad Complutense de Madrid: A. Bermudez and M.A. Martin-Delgado; ICFO Barcelona: M. Lewenstein; Max-Planck Institute Garching: L. Mazza, M. Rizzi; Université de Bruxelles: N. Goldman), this platform has also been shown to be a potential quantum simulator of high-energy physics.

The authors have proposed a clean and controllable setup where a variety of exotic, but still unobserved, phenomena arise. They describe how to build a quantum simulator of Axion Electrodynamics (high-energy physics), and 3D Topological Insulators (condensed matter). In particular, these results pave the way to the fabrication of an Axion, a long sought-after missing particle in the standard model of elementary particles. They show that their atomic setup constitutes an axion medium, where an underlying topological order gives rise to a non-vanishing axion field.

In addition, they show how the axion field can attain arbitrary values, and how its dynamics and space-dependence can be experimentally controlled. Accordingly, their optical-lattice simulator offers a unique possibility to observe diverse effects, such as the Witten effect, the Wormhole effect, or a fractionally charged capacitor, in atomic-physics laboratories.

This work has an interdisciplinary character, which brings together physicists specializing in lattice gauge theories, atomic molecular and optical physics, and condensed matter physics.


Source

Tuesday, November 23, 2010

Software Allows Interactive Tabletop Displays on Web

Tabletop touch-operated displays are becoming popular with professionals in various fields, said Niklas Elmqvist, an assistant professor of electrical and computer engineering at Purdue University.

"These displays are like large iPhones, and because they are large they invite collaboration," he said."So we created a software framework that allows more than one display to connect and share the same space over the Internet."

Users are able to pan and zoom using finger-touch commands, said Elmqvist, who named the software Hugin after a raven in Norse mythology that provided the eyes and ears for the god Odin.

"Hugin was designed for touch screens but can be used with any visual display and input device, such as a mouse and keyboard," he said.

Commercially available tabletop displays are about the size of a coffee table. The researchers created a unit about twice that size -- 58 inches by 37 inches -- for laboratory studies. They tested the software on 12 users in three groups of four on Purdue's main campus in West Lafayette, Ind., and at the University of Manitoba in Canada. The teams worked together to solve problems on tabletop systems.

Findings were detailed in a research paper presented earlier this month during the ACM International Conference on Interactive Tabletops and Surfaces 2010 in Saarbrücken, Germany.

The collaborative capability would aid professionals such as defense and stock market analysts and authorities managing emergency response to disasters. The program allows users to work together with "time-series charts," like the stock market index or similar graphics that change over time, said Elmqvist, who is working with doctoral student Waqas Javed and graduate student KyungTae Kim.

"This system could be run in a command center where you have people who have access to a tabletop," Elmqvist said."In future iterations it might allow integration of mobile devices connected to the tabletop so emergency responders can see on their small device whatever the people in the command center want them to see."

Participants have their own "territorial workspaces," where they may keep certain items hidden for privacy and practical purposes.

"Everyone only sees the things you send to a public domain on the display," Elmqvist said."This is partly for privacy but also because you don't want to overload everybody with everything you are working on."

The researchers are providing Hugin free to the public and expect to make the software available online in December.

"Other people will be able to use it as a platform to build their own thing on top of," he said."They will be able to download and contribute to it, customize it, add new visualizations."

The research paper was written by Kim, Javed and Elmqvist, all from Purdue's School of Electrical and Computer Engineering, and two researchers from the University of Manitoba: graduate student Cary Williams and Pourang Irani, a professor in the university's Department of Computer Science.

The researchers are working with the Pacific Northwest National Laboratory to develop technologies for command and control in emergency situations, such as first response to disasters.


Source

Physicists Demonstrate a Four-Fold Quantum Memory

Their work, described in the November 18 issue of the journal Nature, also demonstrated a quantum interface between the atomic memories -- which represent something akin to a computer "hard drive" for entanglement -- and four beams of light, thereby enabling the four-fold entanglement to be distributed by photons across quantum networks. The research represents an important achievement in quantum information science by extending the coherent control of entanglement from two to multiple (four) spatially separated physical systems of matter and light.

The proof-of-principle experiment, led by William L. Valentine Professor and professor of physics H. Jeff Kimble, helps to pave the way toward quantum networks. Similar to the Internet in our daily life, a quantum network is a quantum "web" composed of many interconnected quantum nodes, each of which is capable of rudimentary quantum logic operations (similar to the "AND" and "OR" gates in computers) utilizing "quantum transistors" and of storing the resulting quantum states in quantum memories. The quantum nodes are "wired" together by quantum channels that carry, for example, beams of photons to deliver quantum information from node to node. Such an interconnected quantum system could function as a quantum computer, or, as proposed by the late Caltech physicist Richard Feynman in the 1980s, as a "quantum simulator" for studying complex problems in physics.

Quantum entanglement is a quintessential feature of the quantum realm and involves correlations among components of the overall physical system that cannot be described by classical physics. Strangely, for an entangled quantum system, there exists no objective physical reality for the system's properties. Instead, an entangled system contains simultaneously multiple possibilities for its properties. Such an entangled system has been created and stored by the Caltech researchers.

Previously, Kimble's group entangled a pair of atomic quantum memories and coherently transferred the entangled photons into and out of the quantum memories. For such two-component -- or bipartite -- entanglement, the subsystems are either entangled or not. But for multi-component entanglement with more than two subsystems -- or multipartite entanglement -- there are many possible ways to entangle the subsystems. For example, with four subsystems, all of the possible pair combinations could be bipartite entangled but not be entangled over all four components; alternatively, they could share a "global" quadripartite (four-part) entanglement.
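As a concrete illustration of the distinction -- an example state, not necessarily the one prepared in this experiment -- a single excitation shared coherently among four subsystems forms a globally entangled W-type state,

\[
|W\rangle = \tfrac{1}{2}\bigl(|1000\rangle + |0100\rangle + |0010\rangle + |0001\rangle\bigr),
\]

which cannot be written as a product of independently entangled pairs, unlike, say, two Bell states \(|\Phi^{+}\rangle_{12} \otimes |\Phi^{+}\rangle_{34}\).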

Hence, multipartite entanglement is accompanied by increased complexity in the system. While this makes the creation and characterization of these quantum states substantially more difficult, it also makes the entangled states more valuable for tasks in quantum information science.

To achieve multipartite entanglement, the Caltech team used lasers to cool four collections (or ensembles) of about one million Cesium atoms, separated by 1 millimeter and trapped in a magnetic field, to within a few hundred millionths of a degree above absolute zero. Each ensemble can have atoms with internal spins that are "up" or "down" (analogous to spinning tops) and that are collectively described by a "spin wave" for the respective ensemble. It is these spin waves that the Caltech researchers succeeded in entangling among the four atomic ensembles.

The technique employed by the Caltech team for creating quadripartite entanglement is an extension of the theoretical work of Luming Duan, Mikhail Lukin, Ignacio Cirac, and Peter Zoller in 2001 for the generation of bipartite entanglement by the act of quantum measurement. This kind of "measurement-induced" entanglement for two atomic ensembles was first achieved by the Caltech group in 2005.

In the current experiment, entanglement was "stored" in the four atomic ensembles for a variable time, and then "read out" -- essentially, transferred -- to four beams of light. To do this, the researchers shot four "read" lasers into the four, now-entangled, ensembles. The coherent arrangement of excitation amplitudes for the atoms in the ensembles, described by spin waves, enhances the matter-light interaction through a phenomenon known as superradiant emission.

"The emitted light from each atom in an ensemble constructively interferes with the light from other atoms in the forward direction, allowing us to transfer the spin wave excitations of the ensembles to single photons," says Akihisa Goban, a Caltech graduate student and coauthor of the paper. The researchers were therefore able to coherently move the quantum information from the individual sets of multipartite entangled atoms to four entangled beams of light, forming the bridge between matter and light that is necessary for quantum networks.

The Caltech team investigated the dynamics by which the multipartite entanglement decayed while stored in the atomic memories. "In the zoology of entangled states, our experiment illustrates how multipartite entangled spin waves can evolve into various subsets of the entangled systems over time, and sheds light on the intricacy and fragility of quantum entanglement in open quantum systems," says Caltech graduate student Kyung Soo Choi, the lead author of the Nature paper. The researchers suggest that the theoretical tools developed for their studies of the dynamics of entanglement decay could be applied for studying the entangled spin waves in quantum magnets.

Further possibilities of their experiment include the expansion of multipartite entanglement across quantum networks and quantum metrology. "Our work introduces new sets of experimental capabilities to generate, store, and transfer multipartite entanglement from matter to light in quantum networks," Choi explains. "It signifies the ever-increasing degree of exquisite quantum control to study and manipulate entangled states of matter and light."

In addition to Kimble, Choi, and Goban, the other authors of the paper are Scott Papp, a former postdoctoral scholar in the Caltech Center for the Physics of Information now at the National Institute of Standards and Technology in Boulder, Colorado, and Steven van Enk, a theoretical collaborator and professor of physics at the University of Oregon, and an associate of the Institute for Quantum Information at Caltech.

This research was funded by the National Science Foundation, the National Security Science and Engineering Faculty Fellowship program at the U.S. Department of Defense (DOD), the Northrop Grumman Corporation, and the Intelligence Advanced Research Projects Activity.


Source