Friday, May 20, 2011

Physicist Accelerates Simulations of Thin Film Growth

Jacques Amar, Ph.D., professor of physics at the University of Toledo (UT), studies the modeling and growth of materials at the atomic level. He uses Ohio Supercomputer Center (OSC) resources and Kinetic Monte Carlo (KMC) methods to simulate the molecular beam epitaxy (MBE) process, in which metals are heated until they transition into a gaseous state and then re-form as thin films by condensing on a wafer in single-crystal layers.

"One of the main advantages of MBE is the ability to control the deposition of thin films and atomic structures on the atomic scale in order to create nanostructures," explained Amar.

Thin films are used in industry to create a variety of products, such as semiconductors, optical coatings, pharmaceuticals and solar cells.

"Ohio's status as a worldwide manufacturing leader has led OSC to focus on the field of advanced materials as one of our areas of primary support," noted Ashok Krishnamurthy, co-interim co-executive director of the center."As a result, numerous respected physicists, chemists and engineers, such as Dr. Amar, have accessed OSC computation and storage resources to advance their vital materials science research."

Recently, Amar leveraged the center's powerful supercomputers to implement a "first-passage time approach" to speed up KMC simulations of the creation of materials just a few atoms thick.

"The KMC method has been successfully used to carry out simulations of a wide variety of dynamical processes over experimentally relevant time and length scales," Amar noted."However, in some cases, much of the simulation time can be 'wasted' on rapid, repetitive, low-barrier events."

While a variety of approaches to dealing with the inefficiencies have been suggested, Amar settled on using a first-passage-time (FPT) approach to improve KMC processing speeds. FPT, sometimes also called first-hitting-time, is a statistical model that sets a threshold for a process and then estimates quantities such as the probability that the process reaches that threshold within a certain amount of time, or the mean time until the threshold is reached.

"In this approach, one avoids simulating the numerous diffusive hops of atoms, and instead replaces them with the first-passage time to make a transition from one location to another," Amar said.

In particular, Amar and colleagues from the UT Department of Physics and Astronomy targeted two atomic-level events for testing the FPT approach: edge diffusion and corner rounding. Edge diffusion involves the "hopping" movement of surface atoms -- called adatoms -- along the edges of islands, which form as the material grows. Corner rounding involves the hopping of adatoms around island corners, leading to smoother islands.

Amar compared the KMC-FPT and regular KMC simulation approaches using several different models of thin-film growth: Cu/Cu(100), fcc(100) and solid-on-solid (SOS). Additionally, he employed two different methods for calculating the FPT for these events: the mean FPT (MFPT) and the full FPT distribution.

"Both methods provided"very good agreement" between the FPT-KMC approach and regular KMC simulations," Amar concluded."In addition, we find that our FPT approach can lead to a significant speed-up, compared to regular KMC simulations."

Amar's FPT-KMC approach accelerated simulations by a factor of approximately 63 to 100 over the corresponding regular KMC simulations for the fcc(100) model, and by a factor of 36 to 76 for the SOS model. For the Cu/Cu(100) tests, speed-up factors of 31 to 42 and 22 to 28 were achieved for simulations using the full FPT distribution and the MFPT, respectively.

Amar's research was supported through multiple grants from the National Science Foundation, as well as by a grant of computer time from OSC.


Source

Thursday, May 19, 2011

Which Technologies Get Better Faster?

In a nutshell, the researchers found that the greater a technology's complexity, the more slowly it changes and improves over time. They devised a way of mathematically modeling complexity, breaking a system down into its individual components and then mapping all the interconnections between these components.
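
As a rough illustration of that idea (a toy model in the spirit of the description above, not the authors' published formulation), one can represent a technology as a set of components plus a map of which components are coupled to which, and let improvement proceed by random trial: a proposed change to one component is kept only if it lowers the combined cost of that component and everything linked to it. Denser interconnection then makes acceptable changes rarer, so total cost falls more slowly. All names and numbers below are illustrative.

```python
# Toy illustration: components improve by random trial, but a change is accepted
# only if it reduces the joint cost of the changed component and its neighbors.
# More interconnections -> fewer acceptable changes -> slower improvement.

import random

def improvement_run(n_components=20, n_links=0, steps=5000, seed=1):
    rng = random.Random(seed)
    cost = [1.0] * n_components
    links = {i: set() for i in range(n_components)}
    edges = set()
    while len(edges) < n_links:                       # random undirected links
        a, b = rng.sample(range(n_components), 2)
        if (a, b) not in edges and (b, a) not in edges:
            edges.add((a, b))
            links[a].add(b)
            links[b].add(a)

    for _ in range(steps):
        i = rng.randrange(n_components)
        cluster = sorted({i} | links[i])
        proposal = {j: rng.random() for j in cluster}  # fresh candidate costs
        if sum(proposal.values()) < sum(cost[j] for j in cluster):
            for j, c in proposal.items():              # keep only joint improvements
                cost[j] = c
    return sum(cost)

for n_links in (0, 20, 60):
    total = improvement_run(n_links=n_links)
    print(f"{n_links:2d} interconnections -> total cost after 5000 steps: {total:.3f}")
```

Running the sketch with 0, 20 and 60 interconnections among 20 components leaves the total cost progressively higher after the same number of trial steps as the coupling increases, which is the qualitative pattern the paper formalizes.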

"It gives you a way to think about how the structure of the technology affects the rate of improvement," says Jessika Trancik, assistant professor of engineering systems at MIT. Trancik wrote the paper with James McNerney, a graduate student at Boston University (BU); Santa Fe Institute Professor Doyne Farmer; and BU physics professor Sid Redner. It appears online this week in theProceedings of the National Academy of Sciences.

The team was inspired by the complexity of energy-related technologies ranging from tiny transistors to huge coal-fired power plants. They have tracked how these technologies improve over time, through either reduced cost or better performance, and in this paper they develop a model that compares that progress to the complexity of the design and the degree of connectivity among its different components.

The authors say the approach they devised for comparing technologies could, for example, help policymakers mitigate climate change: By predicting which low-carbon technologies are likeliest to improve rapidly, their strategy could help identify the most effective areas to concentrate research funding. The analysis makes it possible to pick technologies "not just so they will work well today, but ones that will be subject to rapid development in the future," Trancik says.

Besides the importance of overall design complexity in slowing the rate of improvement, the researchers also found that certain patterns of interconnection can create bottlenecks, causing the pace of improvements to come in fits and starts rather than at a steady rate.

"In this paper, we develop a theory that shows why we see the rates of improvement that we see," Trancik says. Now that they have developed the theory, she and her colleagues are moving on to do empirical analysis of many different technologies to gauge how effective the model is in practice."We're doing a lot of work on analyzing large data sets" on different products and processes, she says.

For now, she suggests, the method is most useful for comparing two different technologies "whose components are similar, but whose design complexity is different." For example, the analysis could be used to compare different approaches to next-generation solar photovoltaic cells, she says. The method can also be applied to processes, such as improving the design of supply chains or infrastructure systems. "It can be applied at many different scales," she says.

Koen Frenken, professor of economics of innovation and technological change at Eindhoven University of Technology in the Netherlands, says this paper "provides a long-awaited theory" for the well-known phenomenon of learning curves. "It has remained a puzzle why the rates at which humans learn differ so markedly among technologies. This paper provides an explanation by looking at the complexity of technology, using a clever way to model design complexity."

Frenken adds,"The paper opens up new avenues for research. For example, one can verify their theory experimentally by having human subjects solve problems with different degrees of complexity." In addition, he says,"The implications for firms and policymakers {are} that R&D should not only be spent on invention of new technologies, but also on simplifying existing technologies so that humans will learn faster how to improve these technologies."

Ultimately, the kind of analysis developed in this paper could become part of the design process -- allowing engineers to "design for rapid innovation," Trancik says, by using these principles to determine "how you set up the architecture of your system."


Source

Wednesday, May 18, 2011

Imaging Technology Reveals Intricate Details of 49-Million-Year-Old Spider

University of Manchester researchers, working with colleagues in Germany, created the intricate images using X-ray computed tomography to study the remarkable spider, which can barely be seen under the microscope in the old and darkened amber.

Writing in the international journal Naturwissenschaften, the scientists showed that the amber fossil -- housed in the Berlin Natural History Museum -- is a member of a living genus of the Huntsman spiders (Sparassidae), a group of often large, active, free-living spiders that are hardly ever trapped in amber.

As well as documenting the oldest known huntsman spider -- including through a short film revealing astounding details -- the scientists showed that even specimens in historical pieces of amber that at first appear to be in very poor condition can yield vital data when studied by computed tomography.

"More than 1,000 species of fossil spider have been described, many of them from amber," said Dr David Penney, from Manchester's Faculty of Life Sciences."The best-known source is Baltic amber which is about 49 million years old, and which has been actively studied for over 150 years.

"Indeed, some of the first fossil spiders to be described back in 1854 were from the historically significant collection of Georg Karl Berendt, which is held in the Berlin Natural History museum. A problem here is that these old, historical amber pieces have reacted with oxygen over time and are now often dark or cracked, making it hard to see the animal specimens inside."

Berendt's amber specimens were supposed to include the oldest example of a so-called huntsman spider, but this seemed strange, as huntsman spiders are strong, quick animals that would be unlikely to be trapped in tree resin. To test this, an international team of experts in fossil and living spiders, and in modern techniques of computer analysis, decided to re-study Georg Berendt's original specimen and determine once and for all what it really was.

"The results were surprising," said Dr Penney."Computed tomography produced 3D images and movies of astounding quality, which allowed us to compare the finest details of the amber fossil with similar-looking living spiders.

"We were able to show that the fossil is unquestionably a Huntsman spider and belongs to a genus calledEusparassus, which lives in the tropics and also arid regions of southern Europe today, but evidently lived in central Europe 50 million years ago.

"The research is particularly exciting because our results show that this method works and that other scientifically important specimens in historical pieces of darkened amber can be investigated and compared to their living relatives in the same way."

Professor Philip Withers, who established the Henry Moseley X-ray Imaging Facility -- a unique suite of 3D X-ray imagers covering scales from a metre to 50nm -- within Manchester's School of Materials, added: "Normally such fossils are really hard to detect because the contrast against the amber is low, but with phase contrast imaging the spiders really jump out at you in 3D. Usually you have to go to a synchrotron X-ray facility to get good phase contrast, but we can get excellent phase contrast in the lab. This is really exciting because it opens up the embedded fossil archive not just in ambers."


Source

Monday, May 16, 2011

Beyond Smart Phones: Sensor Network to Make 'Smart Cities' Envisioned

Computer scientists, electrical and computer engineers, and mathematicians at the TU Darmstadt and the University of Kassel have joined forces and are working on implementing that vision under their "Cocoon" project. The backbone of a "smart" city is a communications network consisting of sensors that receive streams of data, or signals, analyze them, and transmit them onward. Such sensors thus act as both receivers and transmitters, i.e., they are transceivers. The networked communications involved operate wirelessly via radio links and yield added value to all participants by analyzing the incoming data. For example, the "Smart Home" control system already on the market allows networking all sorts of devices and automatically regulating them to suit demands, thereby allegedly yielding energy savings of as much as fifteen percent.

"Smart Home" might soon be followed by"Smart Hospital,""Smart Indus­try," or"Smart Farm," and even"smart" systems tailored to suit mobile net­works are feasible. Traffic jams may be avoided by, for example, car-to-car or car-to-environment (car-to-X) communications. Health-service sys­tems might also benefit from mobile, sensor communications whenever patients need to be kept supplied with information tailored to suit their health­care needs while underway. Furthermore, sensors on their bodies could assess the status of their health and automatically transmit calls for emergency medical assistance, whenever necessary.

"Smart" and mobile, thanks to beam forming

The researchers regard the ceaseless travels of sensors on mobile systems, and their frequent entries into and exits from instrumented areas, as the major hurdle to be overcome in implementing their vision of "smart" cities. Sensor-aided devices will have to deal with that by responding to subtle changes in their environments and flexibly and efficiently regulating the quality of received and transmitted signals. Beam forming, a field in which the TU Darmstadt's Institute for Communications Technology is active, should help out there. On that subject, Prof. Rolf Jakoby of the TU Darmstadt's Electrical Engineering and Information Technology Dept. remarked that, "Current types of antennae radiate omnidirectionally, like light bulbs. We intend to create conditions under which antennae will, in the future, behave like spotlights that, once they have located a sought device, will track it, while suppressing interference by stray electromagnetic radiation from other devices that might also be present in the area."
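
As a generic illustration of the "spotlight" behavior Jakoby describes (a textbook uniform linear array, not the Cocoon project's actual hardware), beam forming applies a phase shift to the signal at each antenna element so that contributions from a chosen direction add up coherently while signals from other directions largely cancel. The element count, spacing and angles below are illustrative.

```python
# Narrow-band beamforming sketch for a uniform linear array: phase-shifting each
# element steers the array's sensitivity toward a chosen direction.

import cmath
import math

N = 8                 # number of antenna elements
SPACING = 0.5         # element spacing in wavelengths

def steering_vector(angle_deg):
    """Relative phase of a plane wave arriving from angle_deg at each element."""
    s = math.sin(math.radians(angle_deg))
    return [cmath.exp(2j * math.pi * SPACING * n * s) for n in range(N)]

def array_gain(target_deg, source_deg):
    """Power response toward source_deg when the array is steered to target_deg."""
    weights = [v.conjugate() / N for v in steering_vector(target_deg)]
    response = sum(w * v for w, v in zip(weights, steering_vector(source_deg)))
    return abs(response) ** 2

steer = 20.0   # direction of the device being tracked
for source in (20.0, 0.0, -40.0):
    print(f"source at {source:6.1f} deg -> relative power {array_gain(steer, source):.3f}")
```

Reconfigurable antennas add the ability to recompute these weights on the fly as a tracked device moves, which is what makes the approach attractive for mobile sensor networks.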

Such antennae, along with the transceivers equipped with them, are thus reconfigurable, i.e., adjustable to suit ambient conditions by means of onboard electronic circuitry or remote controls. Working in collaboration with an industrial partner, Jakoby has already equipped terrestrial digital-television (TDTV) transmitters with reconfigurable amplifiers that allow boosting transmitted-signal levels by as much as ten percent. He added that, "If all of Germany's TDTV transmitters were equipped with such amplifiers, we could shut down one nuclear power plant."

Frequency bands are a scarce resource

Reconfigurable devices also make much more efficient use of a scarce resource: frequency bands. Users have thus far been allocated rigorously defined frequency bands, yet only fifteen to twenty percent of the capacity of even the more popular bands is actually used. Beam forming might allow making more efficient use of them. Jakoby noted that, "This is an area that we are still taking a close look at, but we are well along the way toward understanding the system better." However, only a few uses of beam forming have emerged to date, since currently available systems are too expensive for mass applications.

Small, model networks are targeted

Yet another fundamental problem remains to be solved before "smart" cities may become realities. Sensor communications require the cooperation of all devices involved, across all communications protocols, such as "Bluetooth," and across all networks, such as the European Global System for Mobile Communications (GSM) mobile-telephone network or wireless local-area networks (WLAN), which cannot be achieved with current devices, communications protocols, and networks. Jakoby explained that, "Converting all devices to a common communications protocol is infeasible, which is why we are seeking a new protocol that would be superimposed upon everything and allow them to communicate via several protocols."

Transmission channels would also have to be capable of handling a massive flood of data. As Prof. Abdelhak Zoubir of the TU Darmstadt's Electrical Engineering and Information Technology Dept., the "Cocoon" project's coordinator, put it, "A 'smart' Darmstadt alone would surely involve a million sensors communicating with one another via satellites, mobile telephones, computers, and all of the other types of devices that we already have available." Furthermore, since a single mobile sensor is readily capable of generating several hundred megabytes of data annually, new models for handling the communications of millions of such sensors that will more densely compress data in order to provide for error-free communications will be needed.

Several hurdles will thus have to be overcome before "smart" cities become reality. Nevertheless, the scientists working on the "Cocoon" project are convinced that they will be able to simulate a "smart" city incorporating various types of devices employing early versions of small, model networks.
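
One common way to approach the kind of superimposed layer Jakoby describes is illustrated in the sketch below (illustrative class names only; the Cocoon overlay protocol itself is still being designed): each native protocol is hidden behind a common send interface, and a routing layer picks whichever transport can actually reach the destination device.

```python
# Illustrative transport-abstraction sketch: native protocols (Bluetooth, WLAN,
# ...) sit behind a shared interface, and an overlay router delivers messages
# through whichever adapter can reach the target device.

class Transport:
    """Common interface that every native-protocol adapter implements."""
    def can_reach(self, device_id): raise NotImplementedError
    def send(self, device_id, payload): raise NotImplementedError

class BluetoothAdapter(Transport):
    def __init__(self, paired): self.paired = set(paired)
    def can_reach(self, device_id): return device_id in self.paired
    def send(self, device_id, payload): print(f"[bluetooth] -> {device_id}: {payload}")

class WlanAdapter(Transport):
    def __init__(self, hosts): self.hosts = set(hosts)
    def can_reach(self, device_id): return device_id in self.hosts
    def send(self, device_id, payload): print(f"[wlan] -> {device_id}: {payload}")

class OverlayRouter:
    """The 'superimposed' layer: routes via whichever transport can deliver."""
    def __init__(self, transports): self.transports = transports
    def send(self, device_id, payload):
        for t in self.transports:
            if t.can_reach(device_id):
                return t.send(device_id, payload)
        raise RuntimeError(f"no transport can reach {device_id}")

router = OverlayRouter([BluetoothAdapter({"thermostat"}), WlanAdapter({"traffic-sensor"})])
router.send("thermostat", "set 21C")
router.send("traffic-sensor", "report congestion")
```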

Over the next three years, scientists at the TU Darmstadt will be receiving a total of 4.5 million Euros from the State of Hesse's Offensive for Developing Scientific-Economic Excellence for their research in conjunction with their "Cocoon -- Cooperative Sensor Communications" project.


Source

Sunday, May 15, 2011

Razing Seattle's Viaduct Doesn’t Guarantee Nightmare Commutes, Model Says

University of Washington statisticians have, for the first time, explored a different source of uncertainty: how much commuters might actually benefit from the project. They found that relying on surface streets would likely have less impact on travel times than previously reported, and that different options' effects on commute times are not well known.

The research, conducted in 2009, was originally intended as an academic exercise looking at how to assess uncertainties in travel-time projections from urban transportation and land-use models. But the paper is being published amid renewed debate about the future of Seattle's waterfront thoroughfare.

"In early 2009 it was decided there would be a tunnel, and we said, 'Well, the issue is settled but it's still of academic interest,'" said co-author Adrian Raftery, a UW statistics professor."Now it has all bubbled up again."

The study was cited last month in a report by the Seattle Department of Transportation reviewing the tunnel's impact. It is now available online, and will be published in an upcoming issue of the journal Transportation Research: Part A.

The UW authors considered 22 commuter routes, eight of which currently include the viaduct. They compared a business-as-usual scenario, where a new elevated highway or a tunnel carries all existing traffic, against a worst-case scenario in which the viaduct is removed and no measures are taken to increase public transportation or otherwise mitigate the effects.

The study found that simply erasing the structure in 2010 would increase travel times a decade later on the eight routes that currently include the viaduct by 1.5 minutes to 9.2 minutes, with an average increase of 6 minutes. The uncertainty was fairly large: zero change fell within the 95 percent confidence range for all the viaduct routes, and an increase of more than 20 minutes was a reasonable projection in a few cases. In the short term some routes along Interstate 5 were slightly slower, but by 2020 the travel times returned to today's levels.

"This indicates that over time removing the structure would increase commute times for people who use the viaduct by about six minutes, although there's quite a bit of uncertainty about exactly how much," Raftery said."In the rest of the region, on I-5, there's no indication that it would increase commute times at all."

The Washington State Department of Transportation had used a computer model in 2008 to explore travel times under various project scenarios. It found that the peak morning commute across downtown would be 10 minutes longer if the state relied on surface transportation. Shortly thereafter state and city leaders decided to build a tunnel.

The UW team in late 2009 ran the same travel model but added an urban land-use component that allows people and businesses to adapt over time -- for instance by moving, switching jobs or relocating businesses. It also included a statistical method that puts error bars around the travel-time projections.

"There is a big interest among transportation planners in putting an uncertainty range around modeling results," said co-author Hana Sevcikova, a UW research scientist who ran the model.

"Often in policy discussions there's interest in either one end or the other of an interval: How bad could things be if we don't make an investment, or if we do make an investment, are we sure that it's necessary?" Raftery said."The ends of the interval can give you a sense of that."

The UW study used a method called Bayesian statistics to combine computer models with actual data. Researchers used 2000 and 2005 land-use data and 2005 commute travel times to fine-tune the model. Bayesian statistics improves the model's accuracy and provides an uncertainty range around the model's projections.
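
The core Bayesian move can be sketched in a few lines (an illustrative toy, not the UW team's actual calibration of the full UrbanSim model): treat the model's travel-time projection as a prior, update it with observed travel times, and read off both a revised estimate and a 95 percent interval. All numbers below are hypothetical.

```python
# Conjugate normal-normal update: combine a model projection (prior) with
# observed travel times (data) to get a posterior estimate and interval.

import math

# Hypothetical numbers for one commute route, in minutes.
prior_mean, prior_sd = 28.0, 6.0                # model projection and its spread
observations = [25.1, 27.4, 24.8, 26.9, 25.5]   # measured travel times
obs_sd = 3.0                                    # assumed measurement spread

n = len(observations)
obs_mean = sum(observations) / n

prior_prec = 1.0 / prior_sd**2
data_prec = n / obs_sd**2
post_var = 1.0 / (prior_prec + data_prec)
post_mean = post_var * (prior_prec * prior_mean + data_prec * obs_mean)
post_sd = math.sqrt(post_var)

lo, hi = post_mean - 1.96 * post_sd, post_mean + 1.96 * post_sd
print(f"posterior travel time: {post_mean:.1f} min, 95% interval ({lo:.1f}, {hi:.1f})")
```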

The study used UrbanSim, an urban simulation model developed by co-author and former UW faculty member Paul Waddell, now a professor at the University of California, Berkeley. The model starts running in the year 2000, the viaduct is taken down in 2010 and the study focuses on peak morning commutes in the year 2020.

Despite renewed discussion, the authors are not taking a position on the debate.

"This is a scientific assessment. People could well say that six minutes is a lot, and it's worth whatever it takes {to avoid it}," Raftery said."To some extent it comes down to a value judgment, factoring in the economic and environmental impacts."


Source

Saturday, May 14, 2011

Toward Faster Transistors: Physicists Discover Physical Phenomenon That Could Boost Computers' Clock Speed

In this week's issue of the journal Science, MIT researchers and their colleagues at the University of Augsburg in Germany report the discovery of a new physical phenomenon that could yield transistors with greatly enhanced capacitance -- a measure of how much charge a device stores for a given voltage. And that, in turn, could lead to the revival of clock speed as the measure of a computer's power.

In today's computer chips, transistors are made from semiconductors, such as silicon. Each transistor includes an electrode called the gate; applying a voltage to the gate causes electrons to accumulate underneath it. The electrons constitute a channel through which an electrical current can pass, turning the semiconductor into a conductor.

Capacitance measures how much charge accumulates below the gate for a given voltage. The power that a chip consumes, and the heat it gives off, are roughly proportional to the square of the gate's operating voltage. So lowering the voltage could drastically reduce the heat, creating new room to crank up the clock.
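
A back-of-the-envelope calculation makes the trade-off concrete (generic CMOS scaling, not a result from the new material, and ignoring other constraints such as transistor switching speed): dynamic switching power grows roughly as C·V²·f, so a chip that could run at half the voltage would, in principle, have room to raise its clock frequency about fourfold within the same power budget. The capacitance, voltage and frequency values below are illustrative.

```python
# Rough CMOS scaling: dynamic switching power is approximately P = C * V^2 * f.

def dynamic_power(c_farads, v_volts, f_hertz):
    return c_farads * v_volts**2 * f_hertz

C = 1e-9            # illustrative switched capacitance, farads
V0, F0 = 1.0, 3e9   # baseline: 1.0 V at 3 GHz
P0 = dynamic_power(C, V0, F0)

V1 = 0.5                    # halve the operating voltage
F1 = F0 * (V0 / V1) ** 2    # frequency headroom at constant power
print(f"baseline power: {P0:.2f} W at {F0/1e9:.0f} GHz")
print(f"same power at {V1} V allows roughly {F1/1e9:.0f} GHz")
```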

MIT Professor of Physics Raymond Ashoori and Lu Li, a postdoc and Pappalardo Fellow in his lab -- together with Christoph Richter, Stefan Paetel, Thilo Kopp and Jochen Mannhart of the University of Augsburg -- investigated the unusual physical system that results when lanthanum aluminate is grown on top of strontium titanate. Lanthanum aluminate consists of alternating layers of lanthanum oxide and aluminum oxide. The lanthanum-based layers have a slight positive charge; the aluminum-based layers, a slight negative charge. The result is a series of electric fields that all add up in the same direction, creating an electric potential between the top and bottom of the material.

Ordinarily, both lanthanum aluminate and strontium titanate are excellent insulators, meaning that they don't conduct electrical current. But physicists had speculated that if the lanthanum aluminate gets thick enough, its electrical potential would increase to the point that some electrons would have to move from the top of the material to the bottom, to prevent what's called a "polarization catastrophe." The result is a conductive channel at the juncture with the strontium titanate -- much like the one that forms when a transistor is switched on. So Ashoori and his collaborators decided to measure the capacitance between that channel and a gate electrode on top of the lanthanum aluminate.

They were amazed by what they found: Although their results were somewhat limited by their experimental apparatus, it may be that an infinitesimal change in voltage will cause a large amount of charge to enter the channel between the two materials. "The channel may suck in charge -- shoomp! Like a vacuum," Ashoori says. "And it operates at room temperature, which is the thing that really stunned us."

Indeed, the material's capacitance is so high that the researchers don't believe it can be explained by existing physics. "We've seen the same kind of thing in semiconductors," Ashoori says, "but that was a very pure sample, and the effect was very small. This is a super-dirty sample and a super-big effect." It's still not clear, Ashoori says, just why the effect is so big: "It could be a new quantum-mechanical effect or some unknown physics of the material."

There is one drawback to the system that the researchers investigated: While a lot of charge will move into the channel between materials with a slight change in voltage, it moves slowly -- much too slowly for the type of high-frequency switching that takes place in computer chips. That could be because the samples of the material are, as Ashoori says, "super dirty"; purer samples might exhibit less electrical resistance. But it's also possible that, if researchers can understand the physical phenomena underlying the material's remarkable capacitance, they may be able to reproduce them in more practical materials.

Triscone cautions that wholesale changes to the way computer chips are manufactured will inevitably face resistance. "So much money has been injected into the semiconductor industry for decades that to do something new, you need a really disruptive technology," he says.

"It's not going to revolutionize electronics tomorrow," Ashoori agrees."But this mechanism exists, and once we know it exists, if we can understand what it is, we can try to engineer it."


Source

Friday, May 13, 2011

'Surrogates' Aid Design of Complex Parts and Controlling Video Games

The new interactive approach is being used commercially and in research but until now has not been formally defined, and doing so could boost its development and number of applications, said Ji Soo Yi, an assistant professor of industrial engineering at Purdue University.

Conventional computer-aided design programs often rely on the use of numerous menus containing hundreds of selection options. The surrogate interaction uses a drawing that resembles the real object to provide users a more intuitive interface than menus.

The Purdue researchers have investigated the characteristics of surrogate interaction, explored potential ways to use it in design applications, developed software to test those uses and suggested the future directions of the research.

Surrogates are interactive graphical representations of real objects, such as a car or a video game character, with icons on the side labeling specific parts of the figure, said Niklas Elmqvist, a Purdue assistant professor of electrical and computer engineering.

"If you click on one label, you change color, if you drag a border you change its width. Anything you do to the surrogate affects the actual objects you are working with," he said."The way it is now, say I'm working on a car design and wanted to move the rear wheels slightly forward, or I want to change an object's color or thickness of specific parts. I can't make those changes to the drawing directly but have to search in menus and use arcane commands."

Several techniques have been developed over the years to address these issues.

"But they are all isolated and limited efforts with no coherent underlying principle," Elmqvist said."We propose the notion of surrogate interaction to unify other techniques that have been developed. We believe that formalizing this family of interaction techniques will provide an additional and powerful interface design alternative, as well as uncover opportunities for future research."

The approach also allows video gamers to change attributes of animated characters.

"For computer games, especially role playing games, you may have a warrior character that has lots of different armor and equipment," Elmqvist said."Usually you can't interact with the character itself. If you want to put in a new cloak or a sword you have to use this complex system of menus."

Research findings are detailed in a paper presented during the Association for Computing Machinery's CHI Conference on Human Factors in Computing Systems, held through May 12 in Vancouver, British Columbia. The research paper was written by industrial engineering doctoral student Bum chul Kwon, electrical and computer engineering doctoral student Waqas Javed, Elmqvist and Yi.

Kwon and Yi helped theorize the idea of surrogate interaction in relation to previous models of interaction.

The method also makes it possible to manipulate more than one object simultaneously.

"In computer strategy games you might be moving an army or maybe five infantry soldiers, and you want to take a building," Elmqvist said."Using our technique you would let a surrogate, one soldier, represent all of the soldiers. Any commands you issue for the surrogate applies to all five soldiers."

Current video game technology lacks an easy-to-use method to issue such simultaneous commands to all members of a group.

The method also could be used to make maps interactive.

"In maps, usually you have a legend that says this color means forest and this symbol means railroad tracks and so on," Elmqvist said."You can see these symbols in the map, but you can't interact with them. In the new approach, you have a surrogate of the map, and in this surrogate you can interact with these legends. For example, you could search for interstate highways, bridges, public parks."


Source