Posts by Daniel Freeman

The (Im)permanence of Memory

“Memories are one-way communication with the future.” -Scott Aaronson (probably) When you first start learning about physics, you’re taught a great deal about objects at rest and in motion—balls and springs, wheels and pulleys. You learn equations and techniques for accurately describing how these simple systems evolve in time. A bit later, you

You really might want to take a look at neural networks

I’m edging into “shill” territory, but the recent rapid progress in neural networks is really worth elaborating upon: By now, you’re probably aware that Google’s DeepMind developed a Go algorithm that defeated a professional player in a widely publicized match.  You might not know that DeepMind’s paper describing their algorithm was accepted at Nature, and

Highlights from the Breakthrough Prize Symposium

Last week, as previously advertised, UC Berkeley hosted the Breakthrough Prize symposium, showcasing the research of past and present winners of the award.  If you missed the symposium, all of the lectures are available on YouTube. Symposium Highlights by Daniel Freeman Having not yet mastered the ability to place myself in superposition across the three

Neural networks. Neural networks? Neural networks!

Google has received a flurry of media attention recently for its psychedelic image generation technique via neural networks (called inceptionism).  A number of fascinating links have sprouted up concerning neural network techniques, so I’ve collated a handful here with some commentary.   1. As always, Wikipedia is the best place to start.  The specialized articles

Choice articles from the arXiv

The open access movement has gotten a fair amount of attention in recent years due to everyone realizing how much of a cartel the traditional publication model actually is.  This uproar recently resulted in the government mandating that all publicly funded research be made freely available to anyone that wants to

Stackexchange pro clicks

There’s this joke involving an engineer, a physicist, and a mathematician trying to solve a problem–it’s sort of overtold–and the punch line is something like engineers are practical, physicists are lazy, and/or mathematicians are clever depending on whether an engineer, physicist, or mathematician is telling the joke.  I’m not really interested in retelling the joke. 

The Unreasonable Effectiveness of the Ising Model – Part 2

“I won’t say he was the world’s worst lecturer, but he was certainly in contention.”

-Robert Cole

Lars Onsager, probably mid-giggle.

Before I came to Berkeley, I had actually never heard of Lars Onsager, which truly is a shame. His life overflows with intellectual achievement. I would say it culminated in his Nobel Prize in 1968, but the award was for work he did essentially in graduate school (except he never formally went to graduate school—the story of his Ph.D. is worth reading), 37 years earlier. In the intervening years, his mastery of mathematics, physics, and chemistry brought about foundational breakthroughs in the study of statistical mechanics.

Part of his relative obscurity to non-scientists probably owes to his particularly eccentric nature. He didn’t have the flashiness and genius-everyman aesthetic of a Richard Feynman, nor did he have the iconography and mystery of an Albert Einstein. In fact, his eccentricity derives largely from an almost pathological refusal to present his ideas clearly—he famously wrote down a formula for the solution of an outstanding problem on the board at a conference, leaving the derivation of his formula for the scientific community to figure out themselves (which took a year). He eventually released a sketch of his ideas…twenty years later.

The Unreasonable Effectiveness of the Ising Model

(part 1 of 2 really this time I swear)

The wild eyed and white haired physicist archetype sits on your television screen—the backdrop, a rapidly revolving Milky Way animation—gesticulating with his hands as if molding the very fiber of the universe, “Physics, you see, is about the symmetry!  It’s all about symmetry!” he exclaims.

Morgan Freeman kindly translates as platonic solids dance around the screen: “The universe is full of symmetry.”

Science has a handful of words that are often co-opted by popularizers to grab attention and to lend credibility-by-association to their work.  “Coherence” is one of these words, as is the wildly popular “Quantum” (God forbid you be reading about quantum coherences).

The word “symmetry” is a great go-to because it immediately evokes strong visuals.  Lots of things are symmetrical!  But this isn’t what makes symmetry profound.  As one might expect, these buzzwords do, in fact, have highly technical definitions (sometimes several!).

It should come as no surprise, then, that there are many sorts of ‘symmetry’ that come up in popular physics.  The primary case, which involves the criminally undertold story of Emmy Noether and her study of mathematical objects called Lagrangians, requires a fair amount of mathematical background, and is nonetheless given a surprisingly decent treatment in popularizations reminiscent of the caricature in the introduction.

The most intuitive sort of symmetry—the sort that requires the least mathematical tomfoolery to understand—comes from the study of spontaneous symmetry breaking or, more generally, phase transitions.

When I was younger, after learning about solids, liquids, and gases, I wondered why matter happened to like these three particular states.  It’s not particularly intuitive—why not a smooth transition from a dense, hard solid to a more jelly-like intermediate phase before becoming a liquid?  Why do liquids suddenly become completely gaseous instead of just getting less and less dense?  And Jello is just really confusing to a child.

Sort of like a light saber, except not really at all like a light saber

If it weren’t for Mikhail Lukin and me sharing the same alma mater, and for his being an outrageously accomplished and generally excellent scientist, I might dwell on his shameless misappropriation of a Star Wars analogy to describe a recent experiment of his to the popular press (notwithstanding the almost unreasonable effectiveness of that analogy in drawing attention to his work).

But, alas, the experiment is just plain awesome.

Backing up a bit—in your first chemistry class, you were probably exposed to the Bohr model of the atom, wherein electrons dance around nuclei in nice little planetary orbits.

Then, if you learned any more chemistry or physics, you learned how approximately nothing about the Bohr model of the atom was actually true, that it’s sort of miraculous that it gives sensible answers at all, and that electrons actually contort themselves into all manner of lobed and lumpy shapes when flitting about around nuclei.

Then, if you learned quite a bit more chemistry and physics, you might have heard about Rydberg atoms, which, ironically, actually do resemble Bohr’s picture of atoms.  Essentially, you take an atom and excite its outermost electron so much that the electron is very nearly free.  This causes the electron to swing far from its nucleus, but more importantly, endows any such atom with an extremely large dipole moment.

Now, take an intensely cold gas of Rubidium atoms and a couple of finely tuned lasers.  Tune one laser such that it can excite any of the Rubidium atoms to some slightly excited state—say a 5p state, if you remember your “s, p, d, f”s—and tune the other laser to make up the difference to the Rydberg state.  The net result is that one photon from each laser can bring any of the Rubidium atoms from its cool ground state up to the extremely excited Rydberg state (but the absorption must occur in that order: the Rubidium atom must first be excited to the 5p state, and then to the Rydberg state).  Also stipulate that the first laser—the laser exciting to the 5p state—is much weaker than the second.

If you do this, and if I were forced to describe what occurs with a light saber analogy, the result would probably be best described as a light saber fight:

Behind the Science: Infinite Russian cats: part 3 of several

Schrodinger’s Cat has the pleasure of being one of a distinguished handful of particularly viral physics memes, due in large part, no doubt, to its immediate association with the internet-smallpox that is The Cat.

But the point of Schrodinger’s eponymous thought experiment in the collective unconscious is sometimes lost—those that know of it remember that it has something to do with cats being alive and dead at the same time.  Perhaps those more familiar remember something about a radioactive particle decaying, and maybe even remember a cyanide capsule being involved.  But most importantly, just about everyone knows it has to do with Quantum Mechanics.

Matrices are a thing that people use

When I was in the ninth grade, I was taught how to find the determinant of a matrix.  For the uninitiated—or perhaps the spared—this procedure involves writing a bunch of numbers in rows and columns, and then meticulously multiplying, adding, and subtracting in a very specific order.  The larger the matrix, the more tedious the calculation becomes, and the more likely you are to make a mistake.

I didn’t really learn what a determinant was until my second year of college.  In fact, I wasn’t really sure what the purpose of a matrix was until college, either.  I knew they vaguely had something to do with solving systems of equations (another nightmare of a problem to do by hand).

I did not realize that they are actually one of the most useful tools in all of the applied sciences.

This dichotomy stems from the very purpose of matrices clashing with how they were introduced to me: the reason we write data in matrices—the reason we define these strange multiplication rules—the reason we have these algorithms for finding determinants—is because it is convenient.

“But!” interjects Suzie, “You just finished complaining about how annoying it was to do anything with matrices by hand.”

That’s because no one in their right mind does anything involving individual matrix entries by hand unless they absolutely have to.  Matrices, and the more overarching formalism surrounding their theory—Linear Algebra—are extremely powerful tools that allow the efficient analysis of abstract objects called vector spaces.  The convenience of using matrices is inextricably tied to the fact that so many physical systems are vector spaces.  Essentially all of the intuition for what defines a vector space is in this image:
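For the curious, the ninth-grade determinant recipe mentioned earlier has a name—cofactor (Laplace) expansion—and it can be spelled out in a few lines of code. This is a sketch of my own, not anything from the original post, and it is deliberately naive: it runs in factorial time, which is exactly why nobody in their right mind does it by hand (or by computer—real libraries use LU decomposition instead).

```python
# Cofactor (Laplace) expansion along the first row: the same
# multiply/add/subtract recipe from ninth grade, spelled out.

def minor(m, row, col):
    """Matrix m with the given row and column deleted."""
    return [[v for j, v in enumerate(r) if j != col]
            for i, r in enumerate(m) if i != row]

def det(m):
    if len(m) == 1:
        return m[0][0]
    # Alternating signs across the top row: + - + - ...
    return sum((-1) ** j * m[0][j] * det(minor(m, 0, j))
               for j in range(len(m)))

print(det([[1, 2], [3, 4]]))                    # 1*4 - 2*3 = -2
print(det([[2, 0, 1], [1, 3, 2], [0, 1, 4]]))   # 21
```

Even at this toy scale, the bookkeeping (minors, alternating signs) is exactly the tedium the text complains about—which is the point.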


D-Wave drama


It’s a special rite of passage as an academic to see your field, your passion — that thing to which you devote the best of your twenties — steeped in controversy.  Some disciplines, of course, feel this particularly more harshly than others.

One would think that my field, quantum information, would suffer less from this problem considering its status as a niche, speculative technology that’s been in slow but steady development for the better part of three decades.

But one would be wrong.

Not a prime pun

Photo Credit: Michael Brown via Flickr

Primes—those numbers (as some middle school teacher probably tried to make you memorize some years ago) whose only factors are themselves and one (2, 3, 5, 7, 11, etc.)—recently made the news.  This is usually a big deal, because the low-hanging fruit in the number theory of primes, as with most mathematical objects under study for at least a couple thousand years, was consumed long ago.

It’s been known since at least Euclid that there are infinitely many primes (for an accessible proof why, see here).  More tantalizing is the conjecture that there are infinitely many so-called twin primes—consecutive prime numbers spaced only two apart from one another (3 and 5, 5 and 7, 11 and 13, 17 and 19, etc.).  As is often the case with conjectures of this sort in number theory, computers continuously search for larger and larger candidate pairs satisfying this simple yet maddeningly elusive condition.  Every once in a while a new twin prime pair is found (the largest being the 200,700-digit-long monster 3756801695685 · 2^666669 ± 1), but no one has yet proved that there should be infinitely many.
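The small end of the twin-prime search is easy to reproduce. Here is a minimal sketch of my own (not from the post, and nothing like the specialized machinery behind the record pairs): sieve the primes, then scan for pairs exactly two apart.

```python
# Sieve of Eratosthenes, then collect primes p where p + 2 is also prime.

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for q in range(p * p, n + 1, p):
                sieve[q] = False
    return [p for p, is_prime in enumerate(sieve) if is_prime]

def twin_primes_up_to(n):
    ps = set(primes_up_to(n))
    return sorted((p, p + 2) for p in ps if p + 2 in ps)

print(twin_primes_up_to(20))  # [(3, 5), (5, 7), (11, 13), (17, 19)]
```

The pairs keep appearing as far as anyone has ever looked, which is precisely what makes the conjecture so tantalizing—and so stubbornly unproven.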

Behind the Science: Infinite Russian cats–part 2a of 3

What does it mean to say, “My computer can do what your computer can do”?  My computer can play Crysis, and yours probably can’t, but is that really a meaningful distinction?  My computer has a bunch of ports and things on it—USB, HDMI, etc.—and there’s probably a fair bit of overlap between my computer and yours there.  But my computer could be missing most, if not all, of its ports, and it could definitely still be said to compute.

You might have a Mac, and I have a PC, but again, that seems like a superficial difference—while there may be certain programs that only run on PCs and not Macs, it’s not like a similar sort of program couldn’t be written for a Mac that did the same thing.

We could play this little reductionist game all day—winding down past the processor of the computer, comparing flip-flops and NAND gates, past transistors, down to the electrons themselves—those tireless carriers of the binary alphabet of our information age.

But what’s the difference between electrons carrying out computations versus me carrying out a computation?  After all, given enough time, I (or perhaps, Suzie) could manually reproduce anything either my or your computer is capable of computing.  We’d just have to follow the correct rules (and possess a sufficient amount of paper…and pencil lead).
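That mention of NAND gates is worth a brief aside: NAND is "universal," meaning every Boolean function can be wired out of NAND gates alone—which is part of why the reductionist game above bottoms out so cleanly. A quick sketch of the standard constructions (the function names and the code are my own illustration, not anything from the post):

```python
# NAND alone suffices to build all of Boolean logic.

def nand(a, b):
    return 1 - (a & b)

def not_(a):
    return nand(a, a)          # NOT from one NAND

def and_(a, b):
    return not_(nand(a, b))    # AND from two NANDs

def or_(a, b):
    return nand(not_(a), not_(b))  # OR from three NANDs

def xor(a, b):
    t = nand(a, b)             # the classic four-NAND exclusive-or
    return nand(nand(a, t), nand(b, t))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", and_(a, b), or_(a, b), xor(a, b))
```

Following "the correct rules" with paper and pencil means exactly this: evaluating a (very long) cascade of such gates by hand.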

Thought experiment friday: Newcomb’s problem


“To almost everyone, it is perfectly clear and obvious what should be done.  The difficulty is that these people seem to divide almost evenly on the problem, with large numbers thinking that the opposing half is just being silly.”

-Robert Nozick

Suzie wants to play a game.

You hate playing games with Suzie, because Suzie always wins.  In fact, the last thousand or so times Suzie played this game, not only did she beat you, but she beat everyone else she played (it has occurred to you that Suzie may not be human).

Despite your protestations, Suzie plops two boxes in front of you, one red, one blue, and runs away, cackling.  It’s too late.  You’re already playing Suzie’s game.

Behind the Science: Infinite Russian cats: part 1 of 3

Only marginally related to the following.

This series is devoted to some of the underappreciated and misunderstood limits of human reason.  Interspersed throughout are more Wikipedia articles than any normal person has time to read in a day—consider them an optional and often superior companion to my presentation of the ideas herein.

Infinity is the trump card of childhood argument—an unstoppable arithmetical power play in encounters of escalating scale.  Only when your friend Suzie had the gall to quip, “Nuh uh, I have infinity-plus-one dollars!” did you falter.

“That’s against the rules,” you stutter.

But it’s too late.  Suzie is already running down the hall with your lunch money and calculator, having unquestionably Won the exchange (to any onlooker), and Quashed your pride (and that of your House).

Much like Star Wars, I don’t actually remember when I first heard about the idea of Infinity.  Upon first learning about numbers, the specter of some always bigger thing suggested itself, just out of reach.  I remember distinct pride in mentally counting to one thousand on a plane flight to Disney World when I was younger (in hindsight, I may have skipped the entirety of the 700s (I was a strange child)).  But the infinite, as many a math teacher will remind you, isn’t really a number.  At least, it isn’t the sort of number you’re used to.  (As an aside, it’s worth pointing out that, should you find yourself in a properly refereed duel of Large Numbered wits with Suzie in the future, Scott Aaronson offers an excellent strategy.)  Even among educated scientists, Infinity is little more than that thing which makes integrals either a lot easier or a lot harder to evaluate.

Lossless, lossy, or lost

A stone’s throw from my office, the beach alongside the UCSB campus beckons me to come frolic in its waters.

“The problem is communication.  Too much communication!”

-Homer J. Simpson

Nestled between the roiling sea spray of the Pacific and civilization, the UC Santa Barbara campus evokes the ambiance of a scientific resort—tiled roofing and faux-adobe aesthetic abound.  I recently returned from a brief sojourn to the Kavli Institute for Theoretical Physics at UCSB for a conference, where I had the opportunity to attend a large number of academic talks.

The conference presenter has a herculean task.  In some bounded number of minutes, the speaker has to win an audience’s attention, whilst simultaneously explaining their work to an acceptable level of detail in suitably general language.  In an ideal world, the months of effort that go into any given figure—the hours of graduate students slaving over instruments, the machines working, and not working, working again, and then irreparably imploding—could be distilled and aligned, font chosen, and plot markers set in such a way that the onlooker, mid-coffee-gulp, could be struck by alternating waves of fear, joy, pain, and halcyon understanding upon catching sight of the presenter’s effort.  I would call this lossless communication, and in describing it as I have, I recognize its idealistic impossibility.