Posts by Brian Lambson

Science vs. 11-year-olds, round 2

For the second straight year, Stony Brook University’s Center for Communicating Science is hosting a competition called the Flame Challenge. The goal is for scientists to answer a question about nature in a way that is interesting and informative to an 11-year-old. In fact, 11-year-old schoolchildren are the judges (although professional scientists screen submissions for technical accuracy). The name of the challenge comes from last year’s inaugural question: What is a flame? This year, the question digs even deeper: What is time?

I have to admit, it sounds really hard. Unlike last year’s question, this year’s question has an added degree of difficulty because it probes the cutting edge of physicists’ understanding of the universe. The winner of last year’s Flame Challenge was a seven-and-a-half-minute animation in which atoms are depicted by Legos and chemical reactions by boxing matches. The video is punctuated by a short, tuneful rock song that summarizes the physics and terminology that describe a flame. I thought the video was great once it got past its not-so-eloquent description of atoms (“Everything is made with tiny things called atoms, and these things are the building blocks that make up everything”). Dare I admit that I might have even learned a thing or two about pyrolysis and chemiluminescence? Still, I’m a sucker for infographics, so my vote would have gone to this finalist.

Nobel recap: Cal alum again takes home physics prize

The streak is at two and counting. One year after Berkeley Lab scientist Saul Perlmutter took home the Nobel Prize in Physics, another Cal grad has earned himself a seat of honor at Stockholm’s Nobel ceremony this December. This year, the winner is David Wineland, a former Berkeley physicist who went on to conduct groundbreaking research on quantum physics at the National Institute of Standards and Technology (NIST).

Wineland earned the award for the development and application of a technique for manipulating atoms with light in the 1980s and 1990s. When an atom is confined in a small trap and shielded from interacting with other atoms, its quantum behavior can be probed experimentally. Phenomena like the one famously described by Schrödinger’s Cat, originally conceived as a thought experiment, can be directly observed. However, experiments on isolated atoms proved elusive for many years. The challenge for experimentalists is twofold: to create a trap strong enough and isolated enough for the atom to display quantum behavior, and then to observe that behavior. Wineland addressed this problem by using electric fields to confine an atom to a small region within a vacuum chamber, where it was far removed from any other atoms that might disturb it. Then, to measure the behavior of the atom, he examined how it interacted with light. His efforts pioneered the field now known as quantum optics.

Botanical Garden’s SOL Grotto reignites national Solyndra debate

What do you get when you combine a promising technological innovation, a devastating bankruptcy that set back an entire industry by years, and a political flash point during the heart of election season? Artwork, apparently – highly controversial artwork. From the ashes of Solyndra, a new art installation has risen at the UC Berkeley Botanical Garden, built from the same glass tubes that once defined Solyndra’s unique approach to solar electricity generation. For a company that had always been regarded as symbolic, both in success and failure, it seems an oddly appropriate tribute on the one-year anniversary of the company’s demise.

Ferroelectric materials under the microscope

Materials science began by studying the way a substance’s chemical makeup determines its properties. Recently, however, scientists have come to realize there is more to a material than its composition: changes in the shape of many materials at the nanometer scale can produce startling changes in their behavior. But there remains one class of materials that has been frustratingly difficult to pattern into nanosized particles without destroying the property that makes them unique: ferroelectrics. New research conducted by materials scientists at Lawrence Berkeley National Laboratory and UC Berkeley tackles the open questions surrounding nanosized ferroelectric materials with an array of cutting-edge experimental techniques. Their findings, published last week in the journal Nature Materials, indicate that there may be light at the end of the tunnel for ferroelectrics after all.

Technology deployment in the developing world: Myth versus reality

The narrative usually goes something like this: a naïve scientist invents a clever piece of technology designed to improve the quality of life in the developing world, only to be shocked and horrified when it is not wholly embraced by the people it was designed to help. Of course, such characterizations could not be further from the truth. Many scientists, especially those working on technologies for the developing world, take on challenges precisely because of their desire to address human-level complexity. The fact that difficulties arise on occasion comes as neither a surprise nor a disappointment to them. It’s just part of the job.

Welcome back to the lab, Professor Birgeneau

Recently, Chancellor Robert Birgeneau announced that he will be stepping down as UC Berkeley chancellor and returning to the lab as a physics and materials science professor. In his announcement letter to the campus, he wrote that he hopes to “have one more truly significant physics/materials science experiment still to come in my academic career.” But after his extended hiatus from research, does Birgeneau have what it takes to return to the cutting edge of these two fast-moving fields? To find out, I spent some time poring over his C.V. and research homepage for a closer look at the caliber of researcher he was before becoming an administrator.

Birgeneau’s research career spanned four decades, starting with his graduate studies at Yale in the mid-1960s and lasting until he became president of the University of Toronto in 2000. He published his first academic paper in 1964 in Physical Review while he was a summer intern at Chalk River Nuclear Laboratories in Ontario, Canada. According to the paper’s abstract, he used quantum theoretical calculations and inelastic neutron scattering measurements to characterize the thermodynamic properties of nickel and compared them to those of copper. Titled “Normal modes of vibration in nickel”, the paper has been cited an impressive 167 times – not bad for a first-time author.

Bell Labs, an academic paradise?

Those of you who are well-versed in the history of science and technology have probably heard of Bell Labs, especially if you’re an electrical engineering student like me. For most of the 20th century, Bell Labs was firmly positioned at the cutting edge of technological innovation — so much so that one might even say that it was the cutting edge. You don’t have to take my word for it; just scan through a list of its 10 most important innovations. Recognize any of them?

I bring this up because last weekend, the New York Times ran a fascinating article about the legendary institution. The author, Jon Gertner, argues that Bell Labs embodied an organizational model for innovation that is different from, and in some ways better than, today’s model (Bell Labs is no longer active in the basic sciences). Whereas today’s “innovative” companies are set up to churn out incrementally differentiated consumer products faster than their competitors (Gertner is pointing at you, app developers), Bell Labs was set up to achieve the types of huge technological leaps that required years or decades of concentrated efforts by the world’s top scientists. Much of what we perceive as modern technological progress really stems from these great advances.

Notes from the field: Brazil’s Atlantic Forest

If I told you that I recently travelled to Brazil and saw a number of the world’s most unique plants and animals while trekking through a densely vegetated, humid environment, you’d probably assume I was referring to a hike through the Amazon rainforest, right? That’s because, for good reason, the Amazon attracts quite a bit of international attention — its struggles against deforestation and pollution make their way onto the pages of mainstream news outlets frequently. Meanwhile, we rarely, if ever, hear about the determined fight for the survival of Brazil’s other great forest: the Atlantic Forest on Brazil’s southeast coast. But the Atlantic Forest is exactly where I found myself over winter break, and while my excursion into a protected area of the forest accounted for just one day of the two-week-long trip, its beauty and fragility left a lasting impression on me and my fellow travelers.

The main thing to know about the Atlantic Forest is that it is currently about 85% smaller than it was five centuries ago, yet it still houses nearly as much biodiversity as the much larger Amazon rainforest to the north. A large fraction, up to 40%, of its biodiversity cannot be found anywhere else on Earth. Extensive deforestation and fragmentation have made it necessary for NGOs to step in and help maintain the vibrancy of the now fragile ecosystem. For example, my excursion was guided by members of a non-profit organization called Projeto Juçara, whose mission is to protect the Juçara palm tree. Juçara (juice-ARE-uh) palms are heavily poached for their edible palm hearts, and over time the trees have become one of the many endangered species residing in the Atlantic Forest. As my companions and I walked through the forest, we scattered Juçara seeds alongside the path in the hope that we could help restore the tree to healthy numbers in that area of the forest.

Computer scientists take on cancer research

Computer scientists have made great strides towards shedding their stereotypical “nerd” image in the last decade. By facilitating the meteoric rise of social media, smart phones, and streaming entertainment, it seems that they have finally unlocked the secret to doing things that mainstream society finds useful, intuitive and exciting. More than ever before, we enjoy the fruits of their labor and actually feel cool doing it. It raises the question: where will they be taking their talents next?

For one group of computer scientists here at Berkeley, the answer is as bold as it is simple: they will be curing cancer. In an essay published in the New York Times, one of the team’s leaders, David Patterson, writes, “Computer scientists may have the best skills to fight cancer in the next decade — and they should be signing up in droves.” The team — which currently consists of about 50 researchers in the electrical engineering and computer sciences department — is called the Algorithms, Machines, and People Laboratory (AMP Lab). Their name reflects their goal of using computers to extract meaning from extraordinarily large and complex collections of information about people, such as clinical data or genetic code. An accurate analysis of the available information could result in prescriptive actions that mitigate or even cure diseases like cancer.

Advanced manufacturing: Here, there, and everywhere

Little did I know that I am a pawn in a giant winner-takes-all chess match that has been playing out between the U.S. and China over the last decade or so. If you are a Berkeley grad student in engineering or science, there’s a pretty good chance you are, too.

At least, that’s one of the impressions I came away with after attending yesterday’s Advanced Manufacturing Partnership (AMP) Regional Meeting here on campus. The public workshop — organized by both Berkeley and Stanford — served as a brainstorming session for ways to shape the recently announced AMP program, a $500 million initiative created to revitalize America’s manufacturing sector. The lineup of speakers and panelists included numerous VIPs from industry, government, and academia who expressed their commitment to implementing solutions to the decline of manufacturing in the U.S. through AMP.

The 2011 Bay Area Science Festival has arrived

No need to play this one up. The 2011 Bay Area Science Festival is underway, and we at the BSR blog are thrilled to be here to cover it.

The Bay Area Science Festival is a series of fun and educational science events held throughout the Bay Area between now and November 6th. All events are free and open to the public. A full calendar of events can be found here, and a selection of events run by Science@Cal (UC Berkeley’s hub for science outreach activities) can be found here.

We hope this is not the case, but if you can only make one event, make sure it’s the festival’s grand finale, Discovery Days at AT&T Park on November 6th. Discovery Days will be a one-of-a-kind extravaganza celebrating science, imagination, and our understanding of the natural world, and you don’t want to miss out on all the fun.

BSR blog writers will be on hand for many of the events, so keep an eye out over the next two weeks for our coverage of the festival. We hope to see you there!

Live Blog: 2011 BERC Innovation Expo

Here I am, standing in the main atrium of the Berkeley Art Museum, completely surrounded by brilliant masterpieces of priceless beauty. No, I’m not referring to the museum’s collection of Caravaggio paintings nor its stunning exhibition of Himalayan sculptures. I’m talking about the poster session of the BERC Innovation Expo, the opening event of the 2011 BERC Energy Symposium. Hundreds of academic researchers, entrepreneurs, teachers and everyone in between have converged here tonight to network and promote outside-the-box ideas that will help solve our current global energy problems.

At about 7:30 PM, Jigar Shah, CEO of Carbon War Room, will take the stage of the Museum Theater to deliver the evening’s keynote address. Until then, I’ll circulate the floor of the poster session and post live updates about all the thought-provoking work on display. There’s a ton of ground to cover: solar, nuclear, biofuels — you name it, it’s here. So let’s get started, shall we?

Lights out in Afghanistan: U.S. re-engineering efforts fall short


Returning to the subject of Liz Boatman’s recent post about the importance of ethics training in engineering, consider the following scenario. A major construction project worth hundreds of millions of dollars is in the planning stages. The project, if it is carried out, will be paid for with federal funds. In deciding whether or not to proceed with a particular option, do you (a) rely on a report produced by a firm that stands to earn tens of millions of dollars if a certain option is selected or (b) carry out an independent inquiry to determine the best course of action? The correct answer should be obvious, but USAID, facing a similar decision during rebuilding efforts in war-torn Afghanistan, inexplicably went with option (a). It’s not surprising what happened next.

In this month’s issue of IEEE Spectrum, executive editor Glenn Zorpette reports on the failure of the U.S. to modernize Afghanistan’s electrical infrastructure despite spending tens of billions of taxpayer dollars on the effort over the last eight years. The centerpiece of his article is the Tarakhil power plant outside of Kabul. Tarakhil is a large diesel-fueled generator that was constructed between 2006 and 2010 under the direction of USAID. Massively over budget and years behind schedule when completed, it currently generates virtually no electricity. Why not? It turns out, diesel power plants are extremely expensive to operate, especially when the fuel must be transported through the mountains of Afghanistan. If Tarakhil were to operate at full capacity, its annual costs would be about one third of Afghanistan’s total tax revenue. Although cheaper options like hydroelectric generation would have made infinitely more sense economically, USAID opted for diesel largely because it was backed by a study carried out by engineering firm Black & Veatch. The problem: Black & Veatch, as the primary contractor for the project, was to earn a fixed percentage of the total project cost as profit, as dictated by the terms of their “cost-plus” contract with USAID. In other words, the more expensive the better, at least for Black & Veatch’s bottom line. Had USAID conducted their own study, they almost certainly would have gone with a cheaper electricity source.

Electronic tattoos

As nanoscientists, we often become so engrossed in the task of shrinking our devices that we neglect to pursue ideas that involve relatively large components. At our worst, our attitude can be summed up as, “If it’s visible to the naked eye, then it is too simple to bother with.”

So a recent paper that demonstrates, for the first time, “electronic tattoos” for biomedical sensing applications comes as a surprise and something of a wake-up call to the nanoscience community. The paper, published last week in the journal Science and written by a team led by John Rogers of the University of Illinois, is notable for its lack of nanotubes, quantum dots, scanning electron micrographs, or any of the other hallmarks of a modern scientific paper about electronic devices. Instead, the majority of the figures in the paper are photographs taken through the same optical microscopes that you’d find in a typical middle school classroom. Heck, a skilled surgeon could probably have pieced together the device by hand. And yet, despite its low degree of difficulty from a technological standpoint, their work has taken the blogosphere by storm and is one of the most exciting results I personally have seen all year.

Should Americans make stuff? Do we really need to ask?

John Henry. Paul Bunyan. Rosie the Riveter. Many of the American icons of the past have something in common: they made things. And for a long time, the rest of the country followed suit, becoming the world’s leading producer of manufactured goods throughout the 20th century.

Today, the story has changed. Manufacturing jobs are increasingly being outsourced—and not just because other countries offer cheaper labor. Positions requiring both high- and low-skilled workers have flowed steadily out of the country for the last few decades, to the point that China and Germany have recently surpassed the U.S. as the world’s top exporters.

An argument can be made that there is nothing fundamentally wrong with having less domestic manufacturing. Indeed, American corporations like IBM are often applauded for transitioning from manufacturing to services because of the opportunity to earn higher profit margins. This sentiment is even more visible at the level of individuals: doctors, lawyers and movie stars are among the wealthiest members of society, even though they don’t make anything tangible. If providing services works for individuals and corporations, it would seem that the U.S. could likewise pay its bills (and then some) as a global service provider in return for manufactured goods.

Reaching for the sun

Most of the time, doing something 28% efficiently is nothing to be proud of. But in photovoltaics, it is good enough to set a new world efficiency record—and earn the company that made the record-setting devices millions of dollars in venture capital.

In this case, the lucky company is Alta Devices, which until recently held a low profile in the crowded solar industry. For the last few years, Alta has been quietly developing a novel process for cheaply manufacturing ultra-efficient solar cells. Their hard work appears to be paying off, because last month the company reported that they can make cells that are 28.2% efficient at converting solar energy into electricity. This beats the previous record by more than one percentage point, a wide margin by the solar industry’s standards.

Alta’s solar cells are made of a semiconducting material called gallium arsenide. Because this is a relatively expensive material, most solar companies use cheaper, less efficient alternatives such as silicon. But Alta came up with a transfer process that allows them to lay gallium arsenide layers that are only one micron thick (about 100 times thinner than a sheet of paper) on each cell.  The thin layers don’t use up much material so they don’t cost very much, but they still generate as much electricity as a thicker layer would.  The company’s goal is to generate electricity at a cost of under 50 cents per watt, which is about twice as cost-effective as the current state-of-the-art.
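For a sense of what that efficiency figure means in practice, here is a quick back-of-the-envelope sketch. The 1,000 watts per square meter of “peak sun” is the standard test-condition irradiance used for efficiency ratings; the 18% silicon figure is my own illustrative assumption for a typical cell, not a number from Alta or from this post.

```python
# Back-of-the-envelope solar cell output under "peak sun"
# (standard test-condition irradiance of 1000 W per square meter).

PEAK_SUN_W_PER_M2 = 1000.0  # standard test-condition irradiance

def peak_power_per_m2(efficiency):
    """Electrical watts produced per square meter of cell at peak sun."""
    return PEAK_SUN_W_PER_M2 * efficiency

alta = peak_power_per_m2(0.282)     # Alta Devices' record-setting cell
silicon = peak_power_per_m2(0.18)   # a typical silicon cell (assumed figure)

print(f"record GaAs cell: {alta:.0f} W/m^2")   # -> 282 W/m^2
print(f"typical Si cell:  {silicon:.0f} W/m^2")
```

At 282 watts per square meter, the 50-cents-per-watt goal works out to a manufacturing budget of roughly $140 per square meter of cell, which is the kind of arithmetic that makes micron-thick gallium arsenide layers so attractive.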

BSR Issue 20: One week left to vote for the Reader’s Choice Award

It’s a tight race to the finish in the BSR Spring 2011 Reader’s Choice Award! Voting ends next Friday, June 17, so hurry up and cast your vote here to help your favorite article rise to the top spot.

If you haven’t gotten around to reading the current issue yet (it’s available online here or in print at multiple campus locations), consider getting started with the excerpt posted below, from “What’s the Antimatter” by Denia Djokic (p. 8).  This week, the stunning experimental results featured in Denia’s article were published in the journal Nature Physics.  Remember, you heard it here first!



For many, the word “antimatter” elicits images of the Starship Enterprise ripping through space faster than the speed of light, or canisters of tiny glowing balls threatening to obliterate Vatican City. Scientific inaccuracies in popular culture aside, the prospect of isolating antimatter, which annihilates in a burst of light upon contact with matter, has eluded physicists for decades. And yet, this is just what a group of scientists working at CERN, the European Organization for Nuclear Research, recently succeeded in doing. Several months ago, the international ALPHA (Antihydrogen Laser Physics Apparatus) collaboration, which includes many researchers from UC Berkeley and Lawrence Berkeley National Laboratory, managed to create and, more importantly, capture 38 antihydrogen atoms for about one sixth of a second—an eternity in the world of subatomic particles. This exciting breakthrough will allow physicists to study matter’s counterpart in detail and will ultimately deepen, and possibly fundamentally change, our understanding of the origins of the universe. The first question at hand: why is our universe made almost entirely of matter and not antimatter?

Fin-ally! Berkeley invention to net Intel billions

The transistor, rich in both cutting-edge physics and practical applications, is one of my favorite topics to write about (see BSR issue 17 for my feature article on transistors).  So when Intel Corporation, whose best-selling computer chips each contain billions of transistors, announced this month that it had radically reengineered the transistor for its next generation of microprocessors, I immediately knew my next blog topic was in the bag.


My choice of topic was especially easy because Intel’s new transistor design was invented right here at Berkeley, by a group of electrical engineering professors and graduate students in the late 1990s/early 2000s. Back then, microchip manufacturers were worried that their traditional method for improving transistors – by shrinking them – could only continue until the end of the decade, or perhaps slightly longer.  Transistor dimensions would reach a point that they simply could not be made any smaller without degrading their functionality.

BSR Issue 20: Letter from the editor

In case you missed our announcement on Wednesday, the Berkeley Science Review has released its 20th issue, and it’s a good one!  Look for it around campus or read it online, and don’t forget to submit your vote here for the Reader’s Choice Award.  For more on what’s inside, have a look at the Editor in Chief’s letter to the readers, reproduced below.


Dear readers,

Welcome to the 20th issue of the Berkeley Science Review and our tenth anniversary. To celebrate, I decided to rummage back through our archives and spend some time with Issue 1. It was remarkable to see how little has changed, including, coincidentally, an enduring fascination with volcanoes (Issue 1 p. 7, Issue 20 p. 26), hormonal regulation of sexual behavior (Issue 1 p. 5, Issue 20 p. 9), and solar storms (Issue 1 p. 4, Issue 20 p. 6), phenomena which will continue to fill Berkeley labs and our pages for many years to come. Nevertheless, I am very proud to say that we have come a long way from the black, white, and blue of the first edition. I would like to dedicate this issue to the memory of Eran Karmon, our founding Editor in Chief, who, by starting a science magazine that anyone on campus could pick up and enjoy, planted a seed which has since flowered into an expanding voice for research with a reach far beyond Bancroft and Hearst.

I think, therefore I move

Mind reading has come a long way from its ignominious origins alongside the likes of fortune telling and witchcraft. Scientists and medical doctors have made great strides in their ability to extract and interpret electromagnetic signals from the brain, and unlike mind readers of the past, they have very real practical gains to show for it. One notable success story is the cochlear implant, which is currently in use by nearly a quarter of a million deaf or hard-of-hearing patients.  (For a look at more state-of-the-art applications in the field, consider attending the upcoming California Cognitive Science Conference, featured on our blog last week by Chris Holdgraf).

The so-called brain-machine interface (BMI) technology has not yet been perfected to the point that we need to worry about hackers stealing our secrets or erasing our memories. But it has come far enough that researchers may soon be able to restore physical and sensory functionality to patients with immobilizing conditions such as paralysis and Parkinson’s Disease. Scientists at UC Berkeley and UCSF’s Center for Neural Engineering and Prostheses (CNEP) are among the pioneers in developing this sort of brain repair technology.

Managing scientific data: when bits need babysitters

If a tree falls in a forest, and a microphone picks it up and uploads the recording to an obscure archive where no one ever listens to it, does it make a sound? Philosophical matters aside, questions like this point to one of the central challenges facing the natural sciences in the information age. Thanks to improvements in data collection technologies over the years, scientific data in many fields is being generated at astronomically higher rates than in the past. Although this sounds like good news (and I doubt that the data-starved scientists of yesteryear would be complaining much), researchers often find themselves struggling to keep pace with the deluge of information. The question we must confront is how to design a system in which vast influxes of data can be efficiently accessed, vetted and analyzed by scientists around the globe.

Hydrogen production with disorder-engineered nanoparticles

One great example of how nanomaterials can address environmental problems is photocatalytic water splitting, which produces hydrogen gas through a chemical reaction that consumes only water and sunlight. This eco-friendly hydrogen can power zero-emissions fuel cells found in cars and a number of other emerging clean technologies. The goal is to replace conventional methods of manufacturing hydrogen, which generally consume fossil fuels and/or large amounts of electricity.

In photocatalysis, materials like titanium dioxide (TiO2) nanoparticles catalyze water splitting by absorbing light and transferring the light’s energy to nearby water molecules. In turn, the water breaks apart into its constituent elements, hydrogen and oxygen. Because of the absorption properties of TiO2, artificially generated ultraviolet light is required for the reaction to proceed efficiently. However, in a recent publication in Science, a group of Berkeley Lab researchers have shown that a slightly modified version of TiO2 nanoparticles can split water under natural sunlight.

Ultra-tough glass: bending without breaking

Many years ago, my parents and brother were driving home late at night, full speed on a highway, when a large rock thrown off an overpass struck their car’s windshield. There was a time when an impact like that would have shattered the windshield glass, likely leading to a tragic accident and – for me – a painful childhood. But, thanks to the modern miracle of laminated safety glass, the windshield did not shatter; it only cracked. The rock rolled away, my dad maintained control of the car, and the three of them got home safe and sound.

One of the lessons of that night is that in many applications, the mechanical strength of glass is every bit as important as its transparency. However, there’s a reason we don’t often see literal glass ceilings. The problem is that glass breaks before it bends – even the tiniest fracture spreads rapidly in all directions until the entire pane shatters. In engineering terms, glass is strong (it can withstand a lot of stress before cracking) but not tough (it has little damage tolerance after the onset of cracking). This is in contrast to sheets of metal or plastic that can deform to accommodate small defects, making them generally tougher materials.
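The strength-versus-toughness distinction can be made concrete with classic fracture mechanics: a crack of length a under stress σ becomes unstable once the stress intensity K = σ√(πa) reaches the material’s fracture toughness K_IC. The sketch below uses typical handbook-style values that I am assuming for illustration; they are not figures from the study described here.

```python
import math

def critical_crack_length_mm(stress_mpa, k_ic_mpa_sqrt_m):
    """Largest crack (in mm) a material can tolerate before fast fracture,
    from the Griffith-style criterion a_c = (1/pi) * (K_IC / stress)^2."""
    a_meters = (k_ic_mpa_sqrt_m / stress_mpa) ** 2 / math.pi
    return a_meters * 1000  # meters -> millimeters

# Illustrative, assumed values: soda-lime glass K_IC ~ 0.7 MPa*sqrt(m),
# structural steel K_IC ~ 50 MPa*sqrt(m), at an assumed 100 MPa of stress.
stress = 100
print(f"glass: {critical_crack_length_mm(stress, 0.7):.3f} mm")
print(f"steel: {critical_crack_length_mm(stress, 50):.0f} mm")
```

Under these assumptions, glass fails from a flaw a few hundredths of a millimeter long, while steel tolerates cracks thousands of times larger before fracturing — which is exactly why a pane shatters while a metal sheet merely dents.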

To a group of researchers at Lawrence Berkeley National Laboratory and Caltech, this raised the question: is it possible to engineer tougher glass by making it behave more like metal? The answer, it turns out, is yes.

Nobel update: Was Cal short-changed?

As I pointed out a few weeks ago, neither of this year’s winners of the Nobel Prize in physics has a significant connection to UC Berkeley, but it turns out that may only be because the Nobel committee gave the prize to the wrong graphene researchers. At least one prominent physicist believes the Nobel committee took some serious shortcuts in their selection process that caused them to overlook the contributions of two former Cal scientists, Walter De Heer and Philip Kim. Do I smell a recount?


Molecular recess

It would be an understatement to say that molecular machines have been under a tremendous amount of pressure lately. Proponents of nanotechnology have variously held them responsible for curing the world’s diseases, providing mankind with limitless food, water, energy and information, and even self-assembling so we don’t have to make them ourselves. And that’s only a partial list. Under the weight of such towering expectations, can we really blame them if they give up and turn the planet into grey goo?

Perhaps in an effort to save us from such an apocalyptic scenario, some nanoscientists have set more leisurely intermediate goals for molecular machines, like getting them to play games. A group of researchers from Columbia University recently developed “tit-for-tat”, a two-player strategy game that pits a human player against a DNA-based molecular computer.

Graphene research at Cal: Close, but no Nobel

Fans of the Nobel Prize in Physics know that this year’s honors went to a pair of U.K.-based researchers for the discovery of graphene, a.k.a., The World’s Thinnest Material. While neither winner has a significant connection to UC Berkeley (the last Cal professor to win the physics Nobel was George Smoot in 2006), many here in the physics department can rightly claim at least some stake in this year’s prize. That’s because graphene’s discovery in 2004 sparked a huge burst of high-impact research around the globe, much of which has been influenced by the work of Berkeley scientists.

Jump out of your skin and into your e-skin

Last time, I wrote about the reverse-engineering of natural processes to develop more efficient solar cells. It turns out that photovoltaics research is not the only field being guided by nature. This month, the journal Nature Materials published two reports describing a pair of successful attempts to fabricate artificial skin – flexible, stretchable arrays of highly sensitive pressure sensors that produce electrical signals in response to contact. The so-called “e-skin” can be used in applications such as robotics and manufacturing to provide a softer touch when manipulating delicate objects.


Going green… literally

While impressive, the last few decades of human achievement in photovoltaics pale in comparison to nature’s equivalent technology: photosynthesis. Just look at the numbers—every year photosynthesis produces about 3,000 exajoules (EJ) of chemical energy, or 7 × 10¹⁷ kilocalories, which equates to about half the total energy stored in the world’s petroleum reserves (and approximately the average daily caloric intake of eating champ Joey Chestnut). Compare this to the 0.1 EJ of electrical energy produced annually by man-made photovoltaics. Closing this gap is the key to a sustainable energy future, and unlike nature we don’t have the luxury of waiting billions of years to get there.
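Those unit conversions are easy to sanity-check. The energy figures are the ones quoted above; 4,184 joules per kilocalorie is the standard conversion factor.

```python
# Sanity-check the photosynthesis-versus-photovoltaics energy figures.
JOULES_PER_EJ = 1e18      # joules per exajoule
JOULES_PER_KCAL = 4184.0  # standard joule-to-kilocalorie conversion

photosynthesis_j = 3000 * JOULES_PER_EJ  # ~3,000 EJ/yr of chemical energy
photovoltaics_j = 0.1 * JOULES_PER_EJ    # ~0.1 EJ/yr of man-made PV output

print(f"{photosynthesis_j / JOULES_PER_KCAL:.1e} kcal")     # ~7e17, as quoted
print(f"gap: {photosynthesis_j / photovoltaics_j:,.0f}x")   # ~30,000x
```

In other words, nature’s solar collectors currently out-produce ours by a factor of roughly thirty thousand.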


Coming soon to Berkeley: Energy efficient electronics using nanotechnology

To date, nanotechnology has generated lots of excitement in the scientific community, but it hasn’t exactly brought about transformative changes in the life of the average person. In an effort to turn that corner, UC Berkeley recently established the Center for Energy Efficient Electronics Science (E3S), where researchers will work to develop a new generation of nanotechnology-based computer chips that require so little energy that they may never need to be plugged in or recharged. The applications of this research go far beyond improving your laptop’s battery life (important as that might be) and into the realm of making entirely new technologies viable, like biosensors and ubiquitous wireless networks. E3S is led by EECS professor Eli Yablonovitch, whose long list of honors includes his very own Wikipedia entry.