Friday, May 20, 2011

Physicist Accelerates Simulations of Thin Film Growth

Jacques Amar, Ph.D., professor of physics at the University of Toledo (UT), studies the modeling and growth of materials at the atomic level. He uses Ohio Supercomputer Center (OSC) resources and Kinetic Monte Carlo (KMC) methods to simulate the molecular beam epitaxy (MBE) process, in which metals are heated until they transition into a gaseous state and then re-form as thin films by condensing on a wafer in single-crystal layers.

"One of the main advantages of MBE is the ability to control the deposition of thin films and atomic structures on the atomic scale in order to create nanostructures," explained Amar.

Thin films are used in industry to create a variety of products, such as semiconductors, optical coatings, pharmaceuticals and solar cells.

"Ohio's status as a worldwide manufacturing leader has led OSC to focus on the field of advanced materials as one of our areas of primary support," noted Ashok Krishnamurthy, co-interim co-executive director of the center."As a result, numerous respected physicists, chemists and engineers, such as Dr. Amar, have accessed OSC computation and storage resources to advance their vital materials science research."

Recently, Amar leveraged the center's powerful supercomputers to implement a "first-passage time approach" to speed up KMC simulations of the creation of materials just a few atoms thick.

"The KMC method has been successfully used to carry out simulations of a wide variety of dynamical processes over experimentally relevant time and length scales," Amar noted."However, in some cases, much of the simulation time can be 'wasted' on rapid, repetitive, low-barrier events."

While a variety of approaches to dealing with these inefficiencies have been suggested, Amar settled on a first-passage-time (FPT) approach to improve KMC processing speeds. FPT, sometimes also called first-hitting-time, is a statistical model that sets a threshold for a process and then estimates quantities such as the probability that the process reaches the threshold within a given amount of time, or the mean time until the threshold is first reached.

"In this approach, one avoids simulating the numerous diffusive hops of atoms, and instead replaces them with the first-passage time to make a transition from one location to another," Amar said.

In particular, Amar and colleagues from the UT Department of Physics and Astronomy targeted two atomic-level events for testing the FPT approach: edge diffusion and corner rounding. Edge diffusion involves the "hopping" movement of surface atoms -- called adatoms -- along the edges of islands, which are formed as the material is growing. Corner rounding involves the hopping of adatoms around island corners, leading to smoother islands.

Amar compared the KMC-FPT and regular KMC simulation approaches using several different models of thin film growth: Cu/Cu(100), fcc(100) and solid-on-solid (SOS). Additionally, he employed two different methods for calculating the FPT for these events: the mean FPT (MFPT), as well as the full FPT distribution.

"Both methods provided"very good agreement" between the FPT-KMC approach and regular KMC simulations," Amar concluded."In addition, we find that our FPT approach can lead to a significant speed-up, compared to regular KMC simulations."

Amar's FPT-KMC approach ran approximately 63 to 100 times faster than the corresponding regular KMC simulations for the fcc(100) model, and 36 to 76 times faster for the SOS model. For the Cu/Cu(100) tests, speed-up factors of 31 to 42 and 22 to 28 were achieved, respectively, for simulations using the full FPT distribution and the MFPT calculations.

Amar's research was supported through multiple grants from the National Science Foundation, as well as by a grant of computer time from OSC.


Source

Thursday, May 19, 2011

Which Technologies Get Better Faster?

In a nutshell, the researchers found that the greater a technology's complexity, the more slowly it changes and improves over time. They devised a way of mathematically modeling complexity, breaking a system down into its individual components and then mapping all the interconnections between these components.
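
As a purely illustrative sketch of that kind of decomposition -- the component names and the crude coupling count below are assumptions, not the authors' actual model -- a design can be encoded as a small graph of components and their interconnections:

```python
# A toy representation of a technology as components plus their
# interconnections (hypothetical component names, not the paper's data).
design = {
    "combustion_chamber": ["fuel_injector", "cooling_jacket"],
    "fuel_injector":      ["pump"],
    "cooling_jacket":     ["pump", "combustion_chamber"],
    "pump":               [],
}

# A crude complexity proxy: how many other components each component is
# coupled to, counting links in either direction.
coupling = {name: set(deps) for name, deps in design.items()}
for name, deps in design.items():
    for d in deps:
        coupling[d].add(name)

for name in sorted(coupling):
    print(f"{name}: coupled to {len(coupling[name])} component(s)")
print("max coupling:", max(len(v) for v in coupling.values()))
```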

"It gives you a way to think about how the structure of the technology affects the rate of improvement," says Jessika Trancik, assistant professor of engineering systems at MIT. Trancik wrote the paper with James McNerney, a graduate student at Boston University (BU); Santa Fe Institute Professor Doyne Farmer; and BU physics professor Sid Redner. It appears online this week in theProceedings of the National Academy of Sciences.

The team was inspired by the complexity of energy-related technologies ranging from tiny transistors to huge coal-fired power plants. The researchers have tracked how these technologies improve over time, either through reduced cost or better performance, and in this paper develop a model to compare that progress with the complexity of the design and the degree of connectivity among its different components.

The authors say the approach they devised for comparing technologies could, for example, help policymakers mitigate climate change: By predicting which low-carbon technologies are likeliest to improve rapidly, their strategy could help identify the most effective areas in which to concentrate research funding. The analysis makes it possible to pick technologies "not just so they will work well today, but ones that will be subject to rapid development in the future," Trancik says.

Besides the importance of overall design complexity in slowing the rate of improvement, the researchers also found that certain patterns of interconnection can create bottlenecks, causing the pace of improvements to come in fits and starts rather than at a steady rate.

"In this paper, we develop a theory that shows why we see the rates of improvement that we see," Trancik says. Now that they have developed the theory, she and her colleagues are moving on to do empirical analysis of many different technologies to gauge how effective the model is in practice."We're doing a lot of work on analyzing large data sets" on different products and processes, she says.

For now, she suggests, the method is most useful for comparing two different technologies "whose components are similar, but whose design complexity is different." For example, the analysis could be used to compare different approaches to next-generation solar photovoltaic cells, she says. The method can also be applied to processes, such as improving the design of supply chains or infrastructure systems. "It can be applied at many different scales," she says.

Koen Frenken, professor of economics of innovation and technological change at Eindhoven University of Technology in the Netherlands, says this paper "provides a long-awaited theory" for the well-known phenomenon of learning curves. "It has remained a puzzle why the rates at which humans learn differ so markedly among technologies. This paper provides an explanation by looking at the complexity of technology, using a clever way to model design complexity."

Frenken adds, "The paper opens up new avenues for research. For example, one can verify their theory experimentally by having human subjects solve problems with different degrees of complexity." In addition, he says, "The implications for firms and policymakers [are] that R&D should not only be spent on invention of new technologies, but also on simplifying existing technologies so that humans will learn faster how to improve these technologies."

Ultimately, the kind of analysis developed in this paper could become part of the design process -- allowing engineers to "design for rapid innovation," Trancik says, by using these principles to determine "how you set up the architecture of your system."


Source

Wednesday, May 18, 2011

Imaging Technology Reveals Intricate Details of 49-Million-Year-Old Spider

University of Manchester researchers, working with colleagues in Germany, created the intricate images using X-ray computed tomography to study the remarkable spider, which can barely be seen under the microscope in the old and darkened amber.

Writing in the international journal Naturwissenschaften, the scientists showed that the amber fossil -- housed in the Berlin Natural History Museum -- is a member of a living genus of huntsman spiders (Sparassidae), a group of often large, active, free-living spiders that are hardly ever trapped in amber.

As well as documenting the oldest known huntsman spider, notably through a short film revealing astounding details, the scientists showed that even specimens in historical pieces of amber that at first appear to be in very poor condition can yield vital data when studied by computed tomography.

"More than 1,000 species of fossil spider have been described, many of them from amber," said Dr David Penney, from Manchester's Faculty of Life Sciences."The best-known source is Baltic amber which is about 49 million years old, and which has been actively studied for over 150 years.

"Indeed, some of the first fossil spiders to be described back in 1854 were from the historically significant collection of Georg Karl Berendt, which is held in the Berlin Natural History museum. A problem here is that these old, historical amber pieces have reacted with oxygen over time and are now often dark or cracked, making it hard to see the animal specimens inside."

Berendt's amber specimens were thought to include the oldest example of a so-called huntsman spider, but this seemed strange because huntsman spiders are strong, quick animals that would be unlikely to be trapped in tree resin. To test this, an international team of experts in fossil and living spiders, and in modern techniques of computer analysis, decided to re-study Georg Berendt's original specimen and determine once and for all what it really was.

"The results were surprising," said Dr Penney."Computed tomography produced 3D images and movies of astounding quality, which allowed us to compare the finest details of the amber fossil with similar-looking living spiders.

"We were able to show that the fossil is unquestionably a Huntsman spider and belongs to a genus calledEusparassus, which lives in the tropics and also arid regions of southern Europe today, but evidently lived in central Europe 50 million years ago.

"The research is particularly exciting because our results show that this method works and that other scientifically important specimens in historical pieces of darkened amber can be investigated and compared to their living relatives in the same way."

Professor Philip Withers, who established the Henry Moseley X-ray Imaging Facility -- a unique suite of 3D X-ray imagers covering scales from a metre to 50nm -- within Manchester's School of Materials, added: "Normally such fossils are really hard to detect because the contrast against the amber is low, but with phase contrast imaging the spiders really jump out at you in 3D. Usually you have to go to a synchrotron X-ray facility to get good phase contrast, but we can get excellent phase contrast in the lab. This is really exciting because it opens up the embedded fossil archive not just in ambers."


Source

Monday, May 16, 2011

Beyond Smart Phones: Sensor Network to Make 'Smart Cities' Envisioned

Computer scientists, electrical and computer engineers, and mathematicians at the TU Darmstadt and the University of Kassel have joined forces and are working on implementing that vision under their "Cocoon" project. The backbone of a "smart" city is a communications network consisting of sensors that receive streams of data, or signals, analyze them, and transmit them onward. Such sensors thus act as both receivers and transmitters, i.e., they represent transceivers. The networked communications involved operates wirelessly via radio links, and yields added value to all participants by analyzing the input data involved. For example, the "Smart Home" control system already on the market allows networking all sorts of devices and automatically regulating them to suit demands, thereby allegedly yielding energy savings of as much as fifteen percent.

"Smart Home" might soon be followed by"Smart Hospital,""Smart Indus­try," or"Smart Farm," and even"smart" systems tailored to suit mobile net­works are feasible. Traffic jams may be avoided by, for example, car-to-car or car-to-environment (car-to-X) communications. Health-service sys­tems might also benefit from mobile, sensor communications whenever patients need to be kept supplied with information tailored to suit their health­care needs while underway. Furthermore, sensors on their bodies could assess the status of their health and automatically transmit calls for emergency medical assistance, whenever necessary.

"Smart" and mobile, thanks to beam forming

The researchers regard the ceaseless travels of sensors on mobile systems, and their frequent entries into and exits from instrumented areas, as the major hurdle to be overcome in implementing their vision of "smart" cities. Sensor-aided devices will have to deal with that by responding to subtle changes in their environments and flexibly and efficiently regulating the qualities of received and transmitted signals. Beam forming, a field in which the TU Darmstadt's Institute for Communications Technology is active, should help out there. On that subject, Prof. Rolf Jakoby of the TU Darmstadt's Electrical Engineering and Information Technology Dept. remarked, "Current types of antennae radiate omnidirectionally, like light bulbs. We intend to create conditions under which antennae will, in the future, behave like spotlights that, once they have located a sought device, will track it, while suppressing interference from stray electromagnetic radiation from other devices that might also be present in the area."

Such antennae, along with the transceivers equipped with them, are thus reconfigurable, i.e., adjustable to suit ambient conditions by means of onboard electronic circuitry or remote controls. Working in collaboration with an industrial partner, Jakoby has already equipped terrestrial digital-television (TDTV) transmitters with reconfigurable amplifiers that allow boosting transmitted-signal levels by as much as ten percent. He added, "If all of Germany's TDTV transmitters were equipped with such amplifiers, we could shut down one nuclear power plant."

Frequency bands are a scarce resource

Reconfigurable devices also make much more efficient use of a scarce resource: frequency bands. Users have thus far been allocated rigorously defined frequency bands, of which only fifteen to twenty percent of the capacity of even the more popular ones is actually utilized. Beam forming might allow making more efficient use of them. Jakoby noted, "This is an area that we are still taking a close look at, but we are well along the way toward understanding the system better." However, only a few uses of beam forming have emerged to date, since currently available systems are too expensive for mass applications.

Small, model networks are targeted

Yet another fundamental problem remains to be solved before "smart" cities may become realities. Sensor communications requires the cooperation of all devices involved, across all communications protocols, such as Bluetooth, and across all networks, such as the European Global System for Mobile Communications (GSM) mobile-telephone network or wireless local-area networks (WLAN), which cannot be achieved with current devices, communications protocols, and networks. Jakoby explained, "Converting all devices to a common communications protocol is infeasible, which is why we are seeking a new protocol that would be superimposed upon everything and allow them to communicate via several protocols." Transmission channels would also have to be capable of handling a massive flood of data, since, as Prof. Abdelhak Zoubir of the TU Darmstadt's Electrical Engineering and Information Technology Dept., the "Cocoon" project's coordinator, put it, "A 'smart' Darmstadt alone would surely involve a million sensors communicating with one another via satellites, mobile telephones, computers, and all of the other types of devices that we already have available." Furthermore, since a single mobile sensor is readily capable of generating several hundred megabytes of data annually, new models for handling the communications of millions of such sensors, models that compress data more densely in order to provide for error-free communications, will be needed. Several hurdles will thus have to be overcome before "smart" cities become reality. Nevertheless, the scientists working on the "Cocoon" project are convinced that they will be able to simulate a "smart" city incorporating various types of devices using early versions of small, model networks.

Over the next three years, scientists at the TU Darmstadt will receive a total of 4.5 million euros from the State of Hesse's Offensive for Developing Scientific-Economic Excellence for their research in conjunction with their "Cocoon -- Cooperative Sensor Communications" project.


Source

Sunday, May 15, 2011

Razing Seattle's Viaduct Doesn’t Guarantee Nightmare Commutes, Model Says

University of Washington statisticians have, for the first time, explored a different source of uncertainty: how much commuters might actually benefit from the project. They found that relying on surface streets would likely have less impact on travel times than previously reported, and that the effects of the different options on commute times are not well known.

The research, conducted in 2009, was originally intended as an academic exercise looking at how to assess uncertainties in travel-time projections from urban transportation and land-use models. But the paper is being published amid renewed debate about the future of Seattle's waterfront thoroughfare.

"In early 2009 it was decided there would be a tunnel, and we said, 'Well, the issue is settled but it's still of academic interest,'" said co-author Adrian Raftery, a UW statistics professor."Now it has all bubbled up again."

The study was cited last month in a report by the Seattle Department of Transportation reviewing the tunnel's impact. It is now available online, and will be published in an upcoming issue of the journal Transportation Research: Part A.

The UW authors considered 22 commuter routes, eight of which currently include the viaduct. They compared a business-as-usual scenario, where a new elevated highway or a tunnel carries all existing traffic, against a worst-case scenario in which the viaduct is removed and no measures are taken to increase public transportation or otherwise mitigate the effects.

The study found that simply erasing the structure in 2010 would increase travel times a decade later for the eight routes that currently include the viaduct by 1.5 minutes to 9.2 minutes, with an average increase of 6 minutes. The uncertainty was fairly large, with zero change falling within the 95 percent confidence range for all the viaduct routes, and an increase of more than 20 minutes being a reasonable projection in a few cases. In the short term some routes along Interstate 5 were slightly slower, but by 2020 the travel times returned to today's levels.

"This indicates that over time removing the structure would increase commute times for people who use the viaduct by about six minutes, although there's quite a bit of uncertainty about exactly how much," Raftery said."In the rest of the region, on I-5, there's no indication that it would increase commute times at all."

The Washington State Department of Transportation had used a computer model in 2008 to explore travel times under various project scenarios. It found that the peak morning commute across downtown would be 10 minutes longer if the state relied on surface transportation. Shortly thereafter state and city leaders decided to build a tunnel.

The UW team in late 2009 ran the same travel model but added an urban land-use component that allows people and businesses to adapt over time -- for instance by moving, switching jobs or relocating businesses. It also included a statistical method that puts error bars around the travel-time projections.

"There is a big interest among transportation planners in putting an uncertainty range around modeling results," said co-author Hana Sevcikova, a UW research scientist who ran the model.

"Often in policy discussions there's interest in either one end or the other of an interval: How bad could things be if we don't make an investment, or if we do make an investment, are we sure that it's necessary?" Raftery said."The ends of the interval can give you a sense of that."

The UW study used a method called Bayesian statistics to combine computer models with actual data. Researchers used 2000 and 2005 land-use data and 2005 commute travel times to fine-tune the model. Bayesian statistics improves the model's accuracy and provides an uncertainty range around the model's projections.
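
The following is a minimal sketch of that general idea, using a textbook conjugate normal update with made-up numbers rather than the UW team's actual UrbanSim-based procedure; it shows how a model projection and calibration data combine into a revised estimate with an uncertainty range:

```python
import math

# Illustrative numbers only, not the UW study's values.
prior_mean, prior_var = 10.0, 9.0      # model-based travel-time increase (minutes)
obs_mean, obs_var, n  = 6.0, 16.0, 8   # calibration data: mean, variance, sample count

# Conjugate normal update: a precision-weighted average of prior and data.
post_var  = 1.0 / (1.0 / prior_var + n / obs_var)
post_mean = post_var * (prior_mean / prior_var + n * obs_mean / obs_var)

half_width = 1.96 * math.sqrt(post_var)
print(f"posterior mean: {post_mean:.1f} min")
print(f"95% interval: [{post_mean - half_width:.1f}, {post_mean + half_width:.1f}] min")
```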

The study used UrbanSim, an urban simulation model developed by co-author and former UW faculty member Paul Waddell, now a professor at the University of California, Berkeley. The model starts running in the year 2000, the viaduct is taken down in 2010 and the study focuses on peak morning commutes in the year 2020.

Despite renewed discussion, the authors are not taking a position on the debate.

"This is a scientific assessment. People could well say that six minutes is a lot, and it's worth whatever it takes {to avoid it}," Raftery said."To some extent it comes down to a value judgment, factoring in the economic and environmental impacts."


Source

Saturday, May 14, 2011

Toward Faster Transistors: Physicists Discover Physical Phenomenon That Could Boost Computers' Clock Speed

In this week's issue of the journal Science, MIT researchers and their colleagues at the University of Augsburg in Germany report the discovery of a new physical phenomenon that could yield transistors with greatly enhanced capacitance -- a measure of the voltage required to move a charge. And that, in turn, could lead to the revival of clock speed as the measure of a computer's power.

In today's computer chips, transistors are made from semiconductors, such as silicon. Each transistor includes an electrode called the gate; applying a voltage to the gate causes electrons to accumulate underneath it. The electrons constitute a channel through which an electrical current can pass, turning the semiconductor into a conductor.

Capacitance measures how much charge accumulates below the gate for a given voltage. The power that a chip consumes, and the heat it gives off, are roughly proportional to the square of the gate's operating voltage. So lowering the voltage could drastically reduce the heat, creating new room to crank up the clock.
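
As a back-of-the-envelope illustration of that square-law relationship (generic scaling only, not figures from the paper), even a modest reduction in operating voltage cuts power substantially:

```python
# Power (and heat) scale roughly with the square of the gate voltage, so a
# modest voltage reduction buys a large thermal margin. Illustrative only.
def relative_power(v_new, v_old):
    return (v_new / v_old) ** 2

for v in (1.0, 0.9, 0.7, 0.5):
    print(f"voltage x{v:.1f} -> power x{relative_power(v, 1.0):.2f}")
```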

MIT Professor of Physics Raymond Ashoori and Lu Li, a postdoc and Pappalardo Fellow in his lab -- together with Christoph Richter, Stefan Paetel, Thilo Kopp and Jochen Mannhart of the University of Augsburg -- investigated the unusual physical system that results when lanthanum aluminate is grown on top of strontium titanate. Lanthanum aluminate consists of alternating layers of lanthanum oxide and aluminum oxide. The lanthanum-based layers have a slight positive charge; the aluminum-based layers, a slight negative charge. The result is a series of electric fields that all add up in the same direction, creating an electric potential between the top and bottom of the material.

Ordinarily, both lanthanum aluminate and strontium titanate are excellent insulators, meaning that they don't conduct electrical current. But physicists had speculated that if the lanthanum aluminate gets thick enough, its electrical potential would increase to the point that some electrons would have to move from the top of the material to the bottom, to prevent what's called a "polarization catastrophe." The result is a conductive channel at the juncture with the strontium titanate -- much like the one that forms when a transistor is switched on. So Ashoori and his collaborators decided to measure the capacitance between that channel and a gate electrode on top of the lanthanum aluminate.

They were amazed by what they found: Although their results were somewhat limited by their experimental apparatus, it may be that an infinitesimal change in voltage will cause a large amount of charge to enter the channel between the two materials. "The channel may suck in charge -- shoomp! Like a vacuum," Ashoori says. "And it operates at room temperature, which is the thing that really stunned us."

Indeed, the material's capacitance is so high that the researchers don't believe it can be explained by existing physics. "We've seen the same kind of thing in semiconductors," Ashoori says, "but that was a very pure sample, and the effect was very small. This is a super-dirty sample and a super-big effect." It's still not clear, Ashoori says, just why the effect is so big: "It could be a new quantum-mechanical effect or some unknown physics of the material."

There is one drawback to the system that the researchers investigated: While a lot of charge will move into the channel between materials with a slight change in voltage, it moves slowly -- much too slowly for the type of high-frequency switching that takes place in computer chips. That could be because the samples of the material are, as Ashoori says, "super dirty"; purer samples might exhibit less electrical resistance. But it's also possible that, if researchers can understand the physical phenomena underlying the material's remarkable capacitance, they may be able to reproduce them in more practical materials.

Triscone cautions that wholesale changes to the way computer chips are manufactured will inevitably face resistance."So much money has been injected into the semiconductor industry for decades that to do something new, you need a really disruptive technology," he says.

"It's not going to revolutionize electronics tomorrow," Ashoori agrees."But this mechanism exists, and once we know it exists, if we can understand what it is, we can try to engineer it."


Source

Friday, May 13, 2011

'Surrogates' Aid Design of Complex Parts and Controlling Video Games

The new interactive approach is being used commercially and in research but until now has not been formally defined, and doing so could boost its development and number of applications, said Ji Soo Yi, an assistant professor of industrial engineering at Purdue University.

Conventional computer-aided design programs often rely on the use of numerous menus containing hundreds of selection options. The surrogate interaction uses a drawing that resembles the real object to provide users a more intuitive interface than menus.

The Purdue researchers have investigated the characteristics of surrogate interaction, explored potential ways to use it in design applications, developed software to test those uses and suggested the future directions of the research.

Surrogates are interactive graphical representations of real objects, such as a car or a video game character, with icons on the side labeling specific parts of the figure, said Niklas Elmqvist, a Purdue assistant professor of electrical and computer engineering.

"If you click on one label, you change color, if you drag a border you change its width. Anything you do to the surrogate affects the actual objects you are working with," he said."The way it is now, say I'm working on a car design and wanted to move the rear wheels slightly forward, or I want to change an object's color or thickness of specific parts. I can't make those changes to the drawing directly but have to search in menus and use arcane commands."

Several techniques have been developed over the years to address these issues.

"But they are all isolated and limited efforts with no coherent underlying principle," Elmqvist said."We propose the notion of surrogate interaction to unify other techniques that have been developed. We believe that formalizing this family of interaction techniques will provide an additional and powerful interface design alternative, as well as uncover opportunities for future research."

The approach also allows video gamers to change attributes of animated characters.

"For computer games, especially role playing games, you may have a warrior character that has lots of different armor and equipment," Elmqvist said."Usually you can't interact with the character itself. If you want to put in a new cloak or a sword you have to use this complex system of menus."

Research findings are detailed in a paper presented during the Association for Computing Machinery's CHI Conference on Human Factors in Computing Systems, held through May 12 in Vancouver, British Columbia. The research paper was written by industrial engineering doctoral student Bum chul Kwon, electrical and computer engineering doctoral student Waqas Javed, Elmqvist and Yi.

Kwon and Yi helped theorize the idea of surrogate interaction with relation to previous models of interaction.

The method also makes it possible to manipulate more than one object simultaneously.

"In computer strategy games you might be moving an army or maybe five infantry soldiers, and you want to take a building," Elmqvist said."Using our technique you would let a surrogate, one soldier, represent all of the soldiers. Any commands you issue for the surrogate applies to all five soldiers."

Current video game technology lacks an easy-to-use method to issue such simultaneous commands to all members of a group.

The method also could be used to make maps interactive.

"In maps, usually you have a legend that says this color means forest and this symbol means railroad tracks and so on," Elmqvist said."You can see these symbols in the map, but you can't interact with them. In the new approach, you have a surrogate of the map, and in this surrogate you can interact with these legends. For example, you could search for interstate highways, bridges, public parks."


Source

Wednesday, May 11, 2011

Virtual Possessions Have Powerful Hold on Teenagers, Researchers Say

The very fact that virtual possessions don't have a physical form may actually enhance their value, researchers at Carnegie Mellon's Human-Computer Interaction Institute (HCII) and School of Design discovered in a study of 21 teenagers. A fuller appreciation of the sentiments people can develop for these bits of data could be factored into technology design and could provide opportunities for new products and services, they said.

"A digital photo is valuable because it is a photo but also because it can be shared and people can comment on it," said John Zimmerman, associate professor of human-computer interaction and design. For the young people in the CMU study, a digital photo that friends have tagged, linked and annotated is more meaningful than a photo in a frame or a drawer.

One of the subjects said she always takes lots of photos at events and uploads them immediately so she and her friends can tag and dish about them. "It feels like a more authentic representation of the event," the 16-year-old told the researchers. "We comment and agree on everything together… then there's a shared sense of what happened."

The researchers -- Jodi Forlizzi, associate professor of design and human-computer interaction, and William Odom, a Ph.D. student in HCII, along with Zimmerman -- will present their study May 10 at CHI 2011, the Association for Computing Machinery's Conference on Human Factors in Computing Systems in Vancouver. CHI conference leaders awarded the study Best Paper recognition.

The penchant of people to collect and assign meaning to what are often ordinary objects is well known. "A house is just a place to keep your stuff while you go out and get more stuff," the comedian George Carlin famously observed. But a lot of stuff that often is cherished -- printed books, photographs, music CDs -- is being replaced by electronic equivalents, such as e-books and iPod downloads. And computers are generating artifacts that have never been stuff -- social networking profiles, online game avatars, Foursquare badges -- but can hold meaning.

For their study, Odom, Zimmerman and Forlizzi recruited nine girls and 12 boys, ages 12-17, from middle- and upper-middle-class families who had frequent access to the Internet, mobile phones and other technology. The researchers interviewed them about their everyday lives, their use of technology and about the physical and virtual possessions that they valued.

If a house is a place to store your stuff, then a mobile phone might be considered a treasure box that gives you access to your stuff, the interviews revealed. The "placelessness" of virtual possessions stored online rather than on a computer often enhanced their value because they were always available. One 17-year-old participant said she uploaded all of her photos online so that she could access them whether she was in her bed or at the mall. "Obviously, I can't look at them all and that's not the point," she said. "I like knowing that they'll be there if I want them."

The degree to which users can alter and personalize online objects affects their value. A 17-year-old study participant spent a lot of time developing an avatar for the video game Halo and received a lot of comments and input from friends. The original drawings, he said, "are definitely something I'll keep." Accruing "metadata" -- online time stamps, activity logs and annotations -- also enhanced the value of virtual possessions.

Participants noted that they could display things online, such as a photograph of a boyfriend disliked by parents, which were important to their identity but could never be displayed in a bedroom. The online world, in fact, allowed the teenagers to present different facets of themselves to appropriate groups of friends or to family. Developing privacy controls and other tools for determining who gets to see what virtual possessions in which circumstances is both a need and an opportunity for technology developers, the researchers said.

The persistent archiving of virtual possessions sometimes creates real dilemmas, they observed. If users are collectively creating these artifacts -- a tagged and annotated photo, for instance -- then is a consensus of the users necessary for deleting them?

"In the future, our research will explore what happens when the boundaries of virtual and physical possessions are more blurred," Forlizzi said."We will look at things like tags and social metadata and the role they play in sharing experiences with family members and peers."

One opportunity for technology developers, the team said, would be creating technologies that enable users to encode more metadata into their virtual possessions. An example might be aggregating an individual's status updates, songs most listened to and perhaps even news and weather information associated with a particular event.

In some cases, virtual possessions might be given physical form. For instance, Zimmerman said the team has explored transforming digital images of a past event, along with associated tags and annotations, into oversized postcards that could be mailed to the user.

This research was supported by the National Science Foundation and by Google. HCII is a division of Carnegie Mellon's School of Computer Science.


Source

Tuesday, May 10, 2011

Single Atom Stores Quantum Information

Quantum computers will one day be able to complete in almost no time computational tasks that would take current computers years. They will draw their enormous computing power from their ability to simultaneously process the diverse pieces of information that are stored in the quantum states of microscopic physical systems, such as single atoms and photons. In order to operate, quantum computers must exchange these pieces of information between their individual components. Photons are particularly suitable for this, as no matter needs to be transported with them. Particles of matter, however, will be used for information storage and processing. Researchers are therefore looking for methods whereby quantum information can be exchanged between photons and matter. Although this has already been done with ensembles of many thousands of atoms, physicists at the Max Planck Institute of Quantum Optics in Garching have now demonstrated that quantum information can also be exchanged between single atoms and photons in a controlled way.

Using a single atom as a storage unit has several advantages -- the extreme miniaturization being only one, says Holger Specht from the Garching-based Max Planck Institute, who was involved in the experiment. The stored information can be processed by direct manipulation of the atom, which is important for the execution of logical operations in a quantum computer. "In addition, it offers the chance to check whether the quantum information stored in the photon has been successfully written into the atom without destroying the quantum state," says Specht. It is thus possible to ascertain at an early stage that a computing process must be repeated because of a storage error.

The fact that no one had succeeded until very recently in exchanging quantum information between photons and single atoms was because the interaction between the particles of light and the atoms is very weak. Atom and photon do not take much notice of each other, as it were, like two party guests who hardly talk to each other, and can therefore exchange only a little information. The researchers in Garching have enhanced the interaction with a trick. They placed a rubidium atom between the mirrors of an optical resonator, and then used very weak laser pulses to introduce single photons into the resonator. The mirrors of the resonator reflected the photons to and fro several times, which strongly enhanced the interaction between photons and atom. Figuratively speaking, the party guests thus meet more often and the chance that they talk to each other increases.

The photons carried the quantum information in the form of their polarization. This can be left-handed (the direction of rotation of the electric field is anti-clockwise) or right-handed (clockwise). The quantum state of the photon can contain both polarizations simultaneously as a so-called superposition state. In the interaction with the photon the rubidium atom is usually excited and then loses the excitation again by means of the probabilistic emission of a further photon. The Garching-based researchers did not want this to happen. On the contrary, the absorption of the photon was to bring the rubidium atom into a definite, stable quantum state. The researchers achieved this with the aid of a further laser beam, the so-called control laser, which they directed onto the rubidium atom at the same time as it interacted with the photon.

The spin orientation of the atom contributes decisively to the stable quantum state generated by control laser and photon. Spin gives the atom a magnetic moment. The stable quantum state, which the researchers use for the storage, is thus determined by the orientation of the magnetic moment. The state is characterized by the fact that it reflects the photon's polarization state: the direction of the magnetic moment corresponds to the rotational direction of the photon's polarization, a mixture of both rotational directions being stored by a corresponding mixture of the magnetic moments.

This state is read out by the reverse process: irradiating the rubidium atom with the control laser again causes it to re-emit the photon which was originally incident. In the vast majority of cases, the quantum information in the read-out photon agrees with the information originally stored, as the physicists in Garching discovered. The quantity that describes this relationship, the so-called fidelity, was more than 90 percent. This is significantly higher than the 67 percent fidelity that can be achieved with classical methods, i.e. those not based on quantum effects. The method developed in Garching is therefore a real quantum memory.

The physicists measured the storage time, i.e. the time the quantum information in the rubidium can be retained, as around 180 microseconds. "This is comparable with the storage times of all previous quantum memories based on ensembles of atoms," says Stephan Ritter, another researcher involved in the experiment. Nevertheless, a significantly longer storage time is necessary for the method to be used in a quantum computer or a quantum network. There is also a further quality characteristic of the single-atom quantum memory from Garching which could be improved: the so-called efficiency. It is a measure of how many of the irradiated photons are stored and then read out again. This was just under 10 percent.

The storage time is mainly limited by magnetic field fluctuations from the laboratory surroundings, says Ritter. "It can therefore be increased by storing the quantum information in quantum states of the atoms which are insensitive to magnetic fields." The efficiency is limited by the fact that the atom does not sit still in the centre of the resonator, but moves. This causes the strength of the interaction between atom and photon to decrease. The researchers can thus also improve the efficiency: by greater cooling of the atom, i.e. by further reducing its kinetic energy.

The researchers at the Max Planck Institute in Garching now want to work on these two improvements. "If this is successful, the prospects for the single-atom quantum memory would be excellent," says Stephan Ritter. The interface between light and individual atoms would make it possible to network more atoms in a quantum computer with each other than would be possible without such an interface, a fact that would make such a computer more powerful. Moreover, the exchange of photons would make it possible to quantum mechanically entangle atoms across large distances. The entanglement is a kind of quantum mechanical link between particles which is necessary to transport quantum information across large distances. The technique now being developed at the Max Planck Institute of Quantum Optics could thus some day become an essential component of a future "quantum Internet."


Source

Monday, May 9, 2011

Scientists Afflict Computers With 'Schizophrenia' to Better Understand the Human Brain

The researchers used a virtual computer model, or "neural network," to simulate the excessive release of dopamine in the brain. They found that the network recalled memories in a distinctly schizophrenic-like fashion.

Their results were published in April in Biological Psychiatry.

"The hypothesis is that dopamine encodes the importance-the salience-of experience," says Uli Grasemann, a graduate student in the Department of Computer Science at The University of Texas at Austin."When there's too much dopamine, it leads to exaggerated salience, and the brain ends up learning from things that it shouldn't be learning from."

The results bolster a hypothesis known in schizophrenia circles as the hyperlearning hypothesis, which posits that people suffering from schizophrenia have brains that lose the ability to forget or ignore as much as they normally would. Without forgetting, they lose the ability to extract what's meaningful out of the immensity of stimuli the brain encounters. They start making connections that aren't real, or drowning in a sea of so many connections they lose the ability to stitch together any kind of coherent story.

The neural network used by Grasemann and his adviser, Professor Risto Miikkulainen, is called DISCERN. Designed by Miikkulainen, DISCERN is able to learn natural language. In this study it was used to simulate what happens to language as the result of eight different types of neurological dysfunction. The results of the simulations were compared by Ralph Hoffman, professor of psychiatry at the Yale School of Medicine, to what he saw when studying human schizophrenics.

In order to model the process, Grasemann and Miikkulainen began by teaching a series of simple stories to DISCERN. The stories were assimilated into DISCERN's memory in much the way the human brain stores information -- not as distinct units, but as statistical relationships of words, sentences, scripts and stories.

"With neural networks, you basically train them by showing them examples, over and over and over again," says Grasemann."Every time you show it an example, you say, if this is the input, then this should be your output, and if this is the input, then that should be your output. You do it again and again thousands of times, and every time it adjusts a little bit more towards doing what you want. In the end, if you do it enough, the network has learned."

In order to model hyperlearning, Grasemann and Miikkulainen ran the system through its paces again, but with one key parameter altered. They simulated an excessive release of dopamine by increasing the system's learning rate -- essentially telling it to stop forgetting so much.
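
As a toy illustration of how a single learning-rate parameter can tip a system from orderly learning into instability -- a generic gradient-descent example, not the DISCERN model itself -- consider fitting one weight to a simple target:

```python
# Gradient descent on a single weight fitting the target y = 2 * x.
# The only thing varied between runs is the learning rate.
def train(learning_rate, steps=20):
    w = 0.0
    for _ in range(steps):
        x, y = 1.0, 2.0
        error = w * x - y
        w -= learning_rate * error * x   # gradient step on the squared error
    return w

print("moderate rate :", train(0.1))   # converges toward 2.0
print("excessive rate:", train(2.5))   # overshoots and diverges
```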

"It's an important mechanism to be able to ignore things," says Grasemann."What we found is that if you crank up the learning rate in DISCERN high enough, it produces language abnormalities that suggest schizophrenia."

After being re-trained with the elevated learning rate, DISCERN began putting itself at the center of fantastical, delusional stories that incorporated elements from other stories it had been told to recall. In one answer, for instance, DISCERN claimed responsibility for a terrorist bombing.

In another instance, DISCERN began showing evidence of "derailment" -- replying to requests for a specific memory with a jumble of dissociated sentences, abrupt digressions and constant leaps from the first- to the third-person and back again.

"Information processing in neural networks tends to be like information processing in the human brain in many ways," says Grasemann."So the hope was that it would also break down in similar ways. And it did."

The parallel between their modified neural network and human schizophrenia isn't absolute proof the hyperlearning hypothesis is correct, says Grasemann. It is, however, support for the hypothesis, and also evidence of how useful neural networks can be in understanding the human brain.

"We have so much more control over neural networks than we could ever have over human subjects," he says."The hope is that this kind of modeling will help clinical research."


Source

Sunday, May 8, 2011

Transistors Reinvented Using New 3-D Structure

The three-dimensional Tri-Gate transistors represent a fundamental departure from the two-dimensional planar transistor structure that has powered not only all computers, mobile phones and consumer electronics to date, but also the electronic controls within cars, spacecraft, household appliances, medical devices and virtually thousands of other everyday devices for decades.

"Intel's scientists and engineers have once again reinvented the transistor, this time utilizing the third dimension," said Intel President and CEO Paul Otellini."Amazing, world-shaping devices will be created from this capability as we advance Moore's Law into new realms."

Scientists have long recognized the benefits of a 3-D structure for sustaining the pace of Moore's Law as device dimensions become so small that physical laws become barriers to advancement. The key to this latest breakthrough is Intel's ability to deploy its novel 3-D Tri-Gate transistor design into high-volume manufacturing, ushering in the next era of Moore's Law and opening the door to a new generation of innovations across a broad spectrum of devices.

Moore's Law is a forecast for the pace of silicon technology development which states that roughly every two years transistor density will double, while functionality and performance increase and costs decrease. It has served as the basic business model for the semiconductor industry for more than 40 years.
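
Taken at face value, that doubling rule compounds quickly; a rough, purely illustrative projection:

```python
# Moore's Law as stated above: transistor density doubles roughly every
# two years. A quick projection over a decade (purely illustrative).
density = 1.0   # relative density at year 0
for year in range(0, 11, 2):
    print(f"year {year:2d}: ~{density:.0f}x the starting density")
    density *= 2
```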

Unprecedented Power Savings and Performance Gains

Intel's 3-D Tri-Gate transistors enable chips to operate at lower voltage with lower leakage, providing an unprecedented combination of improved performance and energy efficiency compared to previous state-of-the-art transistors. The capabilities give chip designers the flexibility to choose transistors targeted for low power or high performance, depending on the application.

The 22nm 3-D Tri-Gate transistors provide up to a 37 percent performance increase at low voltage versus Intel's 32nm planar transistors. Because this gain comes at low voltage, the new transistors are ideal for use in small handheld devices, which operate using less energy to "switch" back and forth. Alternatively, the new transistors consume less than half the power of 2-D planar transistors on 32nm chips when operating at the same performance.

"The performance gains and power savings of Intel's unique 3-D Tri-Gate transistors are like nothing we've seen before," said Mark Bohr, Intel Senior Fellow."This milestone is going further than simply keeping up with Moore's Law. The low-voltage and low-power benefits far exceed what we typically see from one process generation to the next. It will give product designers the flexibility to make current devices smarter and wholly new ones possible. We believe this breakthrough will extend Intel's lead even further over the rest of the semiconductor industry."

Continuing the Pace of Innovation -- Moore's Law

Transistors continue to get smaller, cheaper and more energy efficient in accordance with Moore's Law -- named for Intel co-founder Gordon Moore. Because of this, Intel has been able to innovate and integrate, adding more features and computing cores to each chip, increasing performance, and decreasing manufacturing cost per transistor.

Sustaining the progress of Moore's Law becomes even more complex with the 22nm generation. Anticipating this, Intel research scientists in 2002 invented what they called a Tri-Gate transistor, named for the three sides of the gate. This announcement follows further years of development in Intel's highly coordinated research-development-manufacturing pipeline, and marks the implementation of this work for high-volume manufacturing.

The 3-D Tri-Gate transistors are a reinvention of the transistor. The traditional "flat" two-dimensional planar gate is replaced with an incredibly thin three-dimensional silicon fin that rises up vertically from the silicon substrate. Control of current is accomplished by implementing a gate on each of the three sides of the fin -- one on each of the two vertical sides and one across the top -- rather than just one on top, as is the case with the 2-D planar transistor. The additional control enables as much transistor current flowing as possible when the transistor is in the "on" state (for performance), as close to zero as possible when it is in the "off" state (to minimize power), and enables the transistor to switch very quickly between the two states (again, for performance).

Just as skyscrapers let urban planners optimize available space by building upward, Intel's 3-D Tri-Gate transistor structure provides a way to manage density. Since these fins are vertical in nature, transistors can be packed closer together, a critical component to the technological and economic benefits of Moore's Law. For future generations, designers also have the ability to continue growing the height of the fins to get even more performance and energy-efficiency gains.

"For years we have seen limits to how small transistors can get," said Moore."This change in the basic structure is a truly revolutionary approach, and one that should allow Moore's Law, and the historic pace of innovation, to continue."

World's First Demonstration of 22nm 3-D Tri-Gate Transistors

The 3-D Tri-Gate transistor will be implemented in the company's upcoming manufacturing process, called the 22nm node, in reference to the size of individual transistor features. More than 6 million 22nm Tri-Gate transistors could fit in the period at the end of this sentence.

Intel has demonstrated the world's first 22nm microprocessor, codenamed "Ivy Bridge," working in a laptop, server and desktop computer. Ivy Bridge-based Intel® Core™ family processors will be the first high-volume chips to use 3-D Tri-Gate transistors. Ivy Bridge is slated for high-volume production readiness by the end of this year.

This silicon technology breakthrough will also aid in the delivery of more highly integrated Intel® Atom™ processor-based products that scale the performance, functionality and software compatibility of Intel® architecture while meeting the overall power, cost and size requirements for a range of market segment needs.


Source

Saturday, May 7, 2011

Robot Engages Novice Computer Scientists

A product of CMU's famed Robotics Institute, Finch was designed specifically to make introductory computer science classes an engaging experience once again.

A white plastic, two-wheeled robot with bird-like features, Finch can quickly be programmed by a novice to say "Hello, World," or do a little dance, or make its beak glow blue in response to cold temperature or some other stimulus. But the simple look of the tabletop robot is deceptive. Based on four years of educational research sponsored by the National Science Foundation, Finch includes a number of features that could keep students busy for a semester or more thinking up new things to do with it.

"Students are more interested and more motivated when they can work with something interactive and create programs that operate in the real world," said Tom Lauwers, who earned his Ph.D. in robotics at CMU in 2010 and is now an instructor in the Robotics Institute's CREATE Lab."We packed Finch with sensors and mechanisms that engage the eyes, the ears -- as many senses as possible."

Lauwers has launched a startup company, BirdBrain Technologies, to produce Finch and now sells them online at www.finchrobot.com for $99 each.

"Our vision is to make Finch affordable enough that every student can have one to take home for assignments," said Lauwers, who developed the robot with Illah Nourbakhsh, associate professor of robotics and director of the CREATE Lab. Less than a foot long, Finch easily fits in a backpack and is rugged enough to survive being hauled around and occasionally dropped.

Finch includes temperature and light sensors, a three-axis accelerometer and a bump sensor. It has color-programmable LED lights, a beeper and speakers. With a pencil inserted in its tail, Finch can be used to draw pictures. It can be programmed to be a moving, noise-making alarm clock. It even has uses beyond robotics; its accelerometer enables it to be used as a 3-D mouse to control a computer display.

Robot kits suitable for students as young as 12 are commercially available, but often cost more than the Finch, Lauwers said. What's more, the idea is to use the robot to make computer programming lessons more interesting, not to use precious instructional time to first build a robot.

Finch is a plug-and-play device, so no drivers or other software must be installed beyond what is used in typical computer science courses. Finch connects with and receives power from the computer over a 15-foot USB cable, eliminating batteries and off-loading its computation to the computer. Support for a wide range of programming languages and environments is coming, including graphical languages appropriate for young students. Finch currently can be programmed with the Java and Python languages widely used by educators.
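
To give a flavor of what a first assignment might look like, here is a hypothetical sketch of a short Finch program in Python; the module, class and method names are assumptions made for illustration and may not match the actual Finch API:

```python
# Hypothetical sketch of a beginner Finch program. The `finch` module and the
# method names below are assumptions for illustration, not the documented API.
from finch import Finch   # assumed module and class name

robot = Finch()
robot.say("Hello, World")          # assumed speech/buzzer helper
if robot.temperature() < 15:       # assumed temperature sensor read, in deg C
    robot.set_beak_led(0, 0, 255)  # assumed RGB beak LED: glow blue when cold
robot.wheels(0.5, 0.5)             # assumed: briefly drive both wheels forward
robot.close()
```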

A number of assignments are available on the Finch Robot website to help teachers drop Finch into their lesson plans, and the website allows instructors to upload their own assignments or ideas in return for company-provided incentives. The robot has been classroom-tested at the Community College of Allegheny County, Pa., and by instructors in high school, university and after-school programs.

"Computer science now touches virtually every scientific discipline and is a critical part of most new technologies, yet U.S. universities saw declining enrollments in computer science through most of the past decade," Nourbakhsh said."If Finch can help motivate students to give computer science a try, we think many more students will realize that this is a field that they would enjoy exploring."


Source

Friday, May 6, 2011

EEG Headset With Flying Harness Lets Users 'Fly' by Controlling Their Thoughts

Creative director and Rensselaer MFA candidate Yehuda Duenyas describes the "Infinity Simulator" as a platform similar to a gaming console -- like the Wii or the Kinect -- writ large.

"Instead of you sitting and controlling gaming content, it's a whole system that can control live elements -- so you can control 3-D rigging, sound, lights, and video," said Duenyas, who works under the moniker"xxxy.""It's a system for creating hybrids of theater, installation, game, and ride."

Duenyas created the"Infinity Simulator" with a team of collaborators, including Michael Todd, a Rensselaer 2010 graduate in computer science. Duenyas will exhibit the new system in the art installation"The Ascent" on May 12 at Curtis R. Priem Experimental Media and Performing Arts Center (EMPAC).

Ten computer programs running simultaneously link the commercially available EEG headset to the computer-controlled 3-D flying harness and various theater systems, said Todd.

Within the theater, the rigging -- including the harness -- is controlled by a Stage Tech NOMAD console; lights are controlled by an ION console running MIDI show control; sound runs through MAX/MSP; and video through Isadora and Jitter. The "Infinity Simulator," a series of three C programs written by Todd, acts as an intermediary between the headset and the theater systems, connecting and conveying all input and output.
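
As a rough illustration of that intermediary role, the Python sketch below shows a single hub that accepts input events and fans cues out to the connected show systems. It sketches the architecture only; the actual intermediary is the set of C programs described above, and the cue names and handlers here are assumptions.

```python
# Illustrative hub: one input side (headset, iPad), many output sides
# (rigging console, MIDI show control for lights, MAX/MSP sound, video).
class ShowHub:
    def __init__(self):
        self.outputs = {}                 # system name -> handler function

    def register(self, name, handler):
        self.outputs[name] = handler

    def dispatch(self, cue):
        """Forward one cue to every connected theater system."""
        for name, handler in self.outputs.items():
            handler(name, cue)

def log_cue(system, cue):
    # Stand-in for the real protocol each system speaks (OSC, MIDI, etc.).
    print(f"{system:8s} <- {cue}")

if __name__ == "__main__":
    hub = ShowHub()
    for system in ("rigging", "lights", "sound", "video"):
        hub.register(system, log_cue)
    hub.dispatch("ascend")                # e.g. triggered by the EEG headset
    hub.dispatch("fade-out")              # e.g. triggered from the iPad
```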

"We've built a software system on top of the rigging control board and now have control of it through an iPad, and since we have the iPad control, we can have anything control it," said Duenyas."The 'Infinity Simulator' is the center; everything talks to the 'Infinity Simulator.'"

The May 12"The Ascent" installation is only one experience made possible by the new platform, Duenyas said.

"'The Ascent' embodies the maiden experience that we'll be presenting," Duenyas said."But we've found that it's a versatile platform to create almost any type of experience that involves rigging, video, sound, and light. The idea is that it's reactive to the users' body; there's a physical interaction."

Duenyas, a Brooklyn-based artist and theater director, specializes in experiential theater performances.

"The thing that I focus on the most is user experience," Duenyas said."All the shows I do with my theater company and on my own involve a lot of set and set design -- you're entering into a whole world. You're having an experience that is more than going to a show, although a show is part of it."

The"Infinity Simulator" stemmed from an idea Duenyas had for such a theatrical experience.

"It started with an idea that I wanted to create a simulator that would give people a feeling of infinity," Duenyas said. His initial vision was that of a room similar to a Cave Automated Virtual Environment -- a room paneled with projection screens -- in which participants would be able to float effortlessly in an environment intended to evoke a glimpse into infinity.

At Rensselaer, Duenyas took advantage of the technology at hand to explore his idea, first with a video game he developed in 2010, then -- working through the Department of the Arts -- with EMPAC's computer-controlled 3-D theatrical flying harness.

"The charge of the arts department is to allow the artists that they bring into the department to use technology to enhance what they've been doing already," Duenyas said."In coming here (EMPAC), and starting to translate our ideas into a physical space, so many different things started opening themselves up to us."

The 2010 video game, also developed with Todd, tracked the movements -- pitch and yaw -- of players suspended in a custom-rigged harness, allowing them to soar through simulated landscapes. Duenyas said that game (also called the "Infinity Simulator") and the new platform are part of the same vision.

EMPAC Director Johannes Goebel saw the game on display at the 2010 GameFest and discussed the custom-designed 3-D theatrical flying rig in EMPAC with Duenyas. Working through the Arts Department, Duenyas submitted a proposal to work with the rig, and his proposal was accepted.

Duenyas and his team experimented -- first gaining peripheral control over the system, and then linking it to the EEG headset -- and created "The Ascent" installation as an initial project. In the installation, the "Infinity Simulator" is programmed to respond to relaxation.

"We're measuring two brain states -- alpha and theta -- waking consciousness and everyday brain computational processing," said Duenyas."If you close your eyes and take a deep breath, that processing power decreases. When it decreases below a certain threshold, that is the trigger for you to elevate."

As a user rises, the ascent triggers a changing display of lights, sound, and video. Duenyas said he wants to hint at a transcendental experience, while keeping the door open for a more circumspect interpretation.

"The point is that the user is trying to transcend the everyday and get into this meditative state so they can have this experience. I see it as some sort of iconic spiritual simulator. That's the serious side," he said."There's also a real tongue-in-cheek side of my work: I want clouds, I want Terry Gilliam's animated fist to pop out of a cloud and hit you in the face. It's mixing serious religious symbology, but not taking it seriously."

The humor is prompted, in part, by the limitations of this earliest iteration of Duenyas' vision.

"It started with, 'I want to have a glimpse of infinity,' 'I want to float in space.' Then you get in the harness and you're like 'man, this harness is uncomfortable,'" he said."In order to achieve the original vision, we had to build an infrastructure, and I still see development of the infinity experience is a ways off; but what we can do with the infrastructure in a realistic time frame is create 'The Ascent,' which is going to be really fun, and totally other."

Creating the"Infinity Simulator" has prompted new possibilities.

"The vision now is to play with this fun system that we can use to build any experience," he said."It's sort of overwhelming because you could do so many things -- you could create a flight through cumulus clouds, you could create an augmented physicality parkour course where you set up different features in the room and guide yourself to different heights. It's limitless."


Source

Thursday, May 5, 2011

Evolutionary Lessons for Wind Farm Efficiency

Senior Lecturer Dr Frank Neumann, from the School of Computer Science, is using a "selection of the fittest" step-by-step approach called "evolutionary algorithms" to optimise wind turbine placement. This takes into account wake effects, the minimum amount of land needed, wind factors and the complex aerodynamics of wind turbines.

"Renewable energy is playing an increasing role in the supply of energy worldwide and will help mitigate climate change," says Dr Neumann."To further increase the productivity of wind farms, we need to exploit methods that help to optimise their performance."

Dr Neumann says the question of exactly where wind turbines should be placed to gain maximum efficiency is highly complex. "An evolutionary algorithm is a mathematical process where potential solutions keep being improved a step at a time until the optimum is reached," he says.

"You can think of it like parents producing a number of offspring, each with differing characteristics," he says."As with evolution, each population or 'set of solutions' from a new generation should get better. These solutions can be evaluated in parallel to speed up the computation."

Other biology-inspired algorithms for solving complex problems are based on ant colonies.

"Ant colony optimisation" uses the principle of ants finding the shortest way to a source of food from their nest.

"You can observe them in nature, they do it very efficiently communicating between each other using pheromone trails," says Dr Neumann."After a certain amount of time, they will have found the best route to the food -- problem solved. We can also solve human problems using the same principles through computer algorithms."

Dr Neumann has come to the University of Adelaide this year from Germany, where he worked at the Max Planck Institute. He is working on wind turbine placement optimisation in collaboration with researchers at the Massachusetts Institute of Technology.

"Current approaches to solving this placement optimisation can only deal with a small number of turbines," Dr Neumann says."We have demonstrated an accurate and efficient algorithm for as many as 1000 turbines."

The researchers are now looking to fine-tune the algorithms even further using different models of wake effect and complex aerodynamic factors.


Source

Wednesday, May 4, 2011

Revolutionary New Paper Computer Shows Flexible Future for Smartphones and Tablets

"This is the future. Everything is going to look and feel like this within five years," says creator Roel Vertegaal, the director of Queen's University Human Media Lab."This computer looks, feels and operates like a small sheet of interactive paper. You interact with it by bending it into a cell phone, flipping the corner to turn pages, or writing on it with a pen."

The smartphone prototype, called PaperPhone, is best described as a flexible iPhone -- it does everything a smartphone does, like store books, play music or make phone calls. But its display is a 9.5-cm-diagonal, thin-film flexible E Ink screen. The flexible form makes it much more portable than any current mobile computer: it will shape to your pocket.

Dr. Vertegaal will unveil his paper computer on May 10 at 2 pm at the Association for Computing Machinery's CHI 2011 (Computer Human Interaction) conference in Vancouver -- the premier international conference on human-computer interaction.

Being able to store and interact with documents on larger versions of these light, flexible computers means offices will no longer require paper or printers.

"The paperless office is here. Everything can be stored digitally and you can place these computers on top of each other just like a stack of paper, or throw them around the desk" says Dr. Vertegaal.

The invention heralds a new generation of computers that are super lightweight, thin-film and flexible. They use no power when nobody is interacting with them. When users are reading, they don't feel like they're holding a sheet of glass or metal.

An article on a study of interactive bending with flexible thin-film computers will be published at the conference in Vancouver, where the group is also demonstrating a thin-film wristband computer called Snaplet.

The development team included researchers Byron Lahey and Win Burleson of the Motivational Environments Research Group at Arizona State University (ASU); Audrey Girouard and Aneesh Tarun from the Human Media Lab at Queen's University; Jann Kaminski and Nick Colaneri, director of ASU's Flexible Display Center; and Seth Bishop and Michael McCreary, vice president of R&D at E Ink Corporation.

For more information, articles, videos, and high resolution photos, visit http://www.humanmedialab.org/paperphone/ and http://www.youtube.com/watch?v=Rl-qygUEE2c


Source

Tuesday, May 3, 2011

College Students' Use of Kindle DX Points to E-Reader’s Role in Academia

The UW last year was one of seven U.S. universities that participated in a pilot study of the Kindle DX, a larger version of the popular e-reader. UW researchers who study technology looked at how students involved in the pilot project did their academic reading.

"There is no e-reader that supports what we found these students doing," said first author Alex Thayer, a UW doctoral student in Human Centered Design and Engineering."It remains to be seen how to design one. It's a great space to get into, there's a lot of opportunity."

Thayer is presenting the findings in Vancouver, B.C. at the Association for Computing Machinery's Conference on Human Factors in Computing Systems, where the study received an honorable mention for best paper.

"Most e-readers were designed for leisure reading -- think romance novels on the beach," said co-author Charlotte Lee, a UW assistant professor of Human Centered Design and Engineering."We found that reading is just a small part of what students are doing. And when we realize how dynamic and complicated a process this is, it kind of redefines what it means to design an e-reader."

Some of the other schools participating in the pilot project conducted shorter studies, generally looking at the e-reader's potential benefits and drawbacks for course use. The UW study looked more broadly at how students did their academic reading, following both those who incorporated the e-reader into their routines and those who did not.

"We were not trying to evaluate the device, per se, but wanted to think long term, really looking to the future of e-readers, what are students trying to do, how can we support that," Lee said.

The researchers interviewed 39 first-year graduate students in the UW's Department of Computer Science & Engineering, 7 women and 32 men, ranging in age from 21 to 53.

By spring quarter of 2010, seven months into the study, fewer than 40 percent of the students were regularly doing their academic reading on the Kindle DX. Reasons included the device's lack of support for taking notes and the difficulty of looking up references. (Amazon, which makes the Kindle DX, has since improved some of these features.)

UW researchers continued to interview all the students over the nine-month period to find out more about their reading habits, with or without the e-reader. They found:

  • Students did most of the reading in fixed locations: 47 percent of reading was at home, 25 percent at school, 17 percent on a bus and 11 percent in a coffee shop or office.
  • The Kindle DX was more likely to replace students' paper-based reading than their computer-based reading.
  • Of the students who continued to use the device, some read near a computer so they could look up references or do other tasks that were easier to do on a computer. Others tucked a sheet of paper into the case so they could write notes.
  • With paper, three quarters of students marked up texts as they read. This included highlighting key passages, underlining, drawing pictures and writing notes in margins.
  • A drawback of the Kindle DX was the difficulty of switching between reading techniques, such as skimming an article's illustrations or references just before reading the complete text. Students frequently made such switches as they read course material.
  • The digital text also disrupted a technique called cognitive mapping, in which readers used physical cues such as the location on the page and the position in the book to go back and find a section of text or even to help retain and recall the information they had read.

Lee predicts that over time software will help address some of these issues. She even envisions niche software that could support reading styles specific to certain disciplines.

"You can imagine that a historian going through illuminated texts is going to have very different navigation needs than someone who is comparing algorithms," Lee said.

It's likely that desktop computers, laptops, tablet computers and yes, even paper, will play a role in academic reading's future. But the authors say e-readers will also find their place. Thayer imagines the situation will be similar to today's music industry, where mp3s, CDs and LPs all coexist in music-lovers' listening habits.

"E-readers are not where they need to be in order to support academic reading," Lee concludes. But asked when e-readers will reach that point, she predicts:"It's going to be sooner than we think."

Other co-authors are Linda Hwang, Heidi Sales, Pausali Sen and Ninad Dalal of the UW.


Source