Sunday, October 28, 2012

Physics Exam Tips

A while back, i discovered that almost none of the undergrads in my local Chi Alpha knew about a large amount of education research that i consider pretty basic.  Combing through advice emails i had sent to various students, i assembled a list of things i think every student entering a technical field should know.  This was first published by Glen Davis at http://glenandpaula.com/wordpress/.  I'm moving it here so i can get at it more easily when i have students again.

  1. From the first day of class, sit in the front of the room toward the center. At least one study has shown that students who sit in the front are 2–3 times more likely to get an A and 6 times less likely to fail than students sitting in the back, even when seats are randomly assigned on the first day of class. We can debate why this is so all day, but it is so, so take advantage of it. (By ‘the front’ i mean the first ten or so rows of Hewlett 200.)
  2. Be sure to get plenty of sleep the two nights before the exam. Of all the bad conditions you could be in going into a physics test, being tired is probably the worst one that is legal. Studies indicate that the second night before the test is even more important than the night immediately before. A clear, thinking, creative mind is your single greatest asset for any physics you might encounter. If you have been keeping up with the class, getting two full nights of sleep is probably more important than any amount of studying you might do during those two days.
  3. That said, you will probably want to do some studying. If you haven’t already, I highly recommend finding someone else in the class to study with. Go over problems together. Go into the later problems in each chapter and pick some that you’re not sure you can both do. Taking an exam well is very similar to teaching the grader how to do the problems, so even if you are teaching a friend how to do something you already know, you are preparing for the test. If you both (or all) get stuck on something, contact a TA.
  4. Read every problem at the beginning of the test. Your mind will continue to process problems you are not looking at, provided it is awake (see Tip 2). Studies show that you are best served loading all the questions into your brain at the start to give yourself maximum time to contemplate. If you get really stuck on a problem, leave plenty of space and move on. Odds are you’ll have better insight when you come back to it.
  5. DON’T PANIC. Attempt every question. This sounds really obvious, but we occasionally get blue books that have a few scribbles labeled ‘Problem 1’ and nothing else. As best we can tell, these students are looking at the first question, panicking and staring blankly at the paper for forty-five minutes or just walking out. This is something worth practicing to avoid. If you find yourself in a panic: (1) stop and look away from the paper while slowly counting to ten; (2) if you are feeling calm, go back and draw a diagram or write down some possibly relevant equations. If you start panicking again, repeat Steps 1 and 2. If you are not feeling calm, turn a couple pages and start the next question. Things will look better when you come back to this one. Trust me.
  6. Now for a few tips on getting the most [points] out of your graders. Grading a midterm takes 4–5 hours. As much as we try to assess each of you according to all the knowledge of physics you demonstrated, we are going to get tired and eventually parts of our brains are going to go on autopilot. If your answers are in clearly marked boxes (preferably near the left side of the page) and they are right, there is a reduced chance of any error in your work being marked off. If an answer is wrong, but it’s in a box near the left side of the page immediately below the work that produced it, then it is very easy for us to find the one little error and give you most of the points. I know having all the answers in one box at the bottom of the page feels concise, but if one of them is wrong we have no idea where on the page to look for the mistake. On a related note, it is better if you work one part of a problem and then work the next one below it. Believe it or not, grad students can get confused if part c is to the right of part b instead of below it. It’s silly, but after a few hours of grading that’s the way we are, so you might as well not let it hurt you. As a general rule, each line on the page should only have one equation or statement on it (pictures excluded). You may use up more pages that way, but there’s no shortage of blue books.
  7. Whenever possible, draw a picture. Not only will it help you think, but it also helps us know what you were thinking. If you are not absolutely confident in your solution, a minute spent drawing a decent picture is probably worth it in terms of partial credit. Too often I’ve suspected a student knew more than their answer indicated, but they didn’t leave a good record of their thought process so I couldn’t grant partial credit. And that makes me sad. (Organizing graphics are also great antidotes to panic, see Tip 5.)
  8. When you get an answer, check that it makes sense. Negative lengths and times are often indicators that you’ve made a mistake, as are, e.g., megacoulomb charges and kiloamp currents. If this happens to you, go look for the error and fix it. If you can’t find it, let us know that you don’t like the answer and why. One of the easiest ways to tell that someone is lost is if they give you a non-physical answer and don’t blink. As a physicist, it is much easier to grade leniently if a student indicates that they understand why the result of their calculation can’t be right. If nothing else, the grading rubric often has a point designated just for having a result that could be true. You’ll at least get that.
  9. It is well known that having good handwriting improves the attitude of those grading your exam. What is less well known is that having tiny handwriting can hurt you. Often what is perfectly legible to you while you are curled up with your nose 12 inches from the paper makes our eyes hurt after the third or fourth hour of grading. Obviously this vastly reduces the incentive to hunt for that tiny little math error you made in part a. This is not a small matter. I, for one, tend to get a migraine when I bend over small text for too long. So imagine a three hour migraine and then gauge the incentive to just mark you off so I can stop looking at your paper. Find a test that you have taken recently. If you (or better, a friend) can’t clearly read your text at arm’s length, you might consider consciously writing larger on all tests from now on. Grading fatigue isn’t limited to physics TAs.

Wednesday, October 17, 2012

Building A Model Of Global Warming

A good friend whose technical credentials i respect recently suggested to me that the reason all the climate scientists agree about global warming is that they are all running the same model, one that has never been independently re-derived.  I mentioned this to another technically trained friend who said "Wait, you believe in global warming?"  Until this point, i have kind of figured that the greenhouse effect was probably real because i (mostly) trust the peer review process and it seems like most of the people with the expertise to do so are saying that it exists.  That's not enough anymore.  So here is my attempt to model the atmosphere using the good old physics standbys of rough simplification and convenient assumptions.

(This really is a log of my thoughts.  It's not the shortest path to the answer.  In fact it is much longer than i thought it would be.  Calculation 1 is...not my best work.  I tried to take a shortcut that cost me a couple of stupid assumptions and ended up not being any shorter.  But i found a lot of interesting things along the way so i left it intact.)

First some rules: As of this writing, i have absolutely no training in climate science or geophysics and i have not consulted with anyone in those fields.  I will build my model using whatever physics or chemistry seems best to me as i go along.  Since the goal is to get a non-expert opinion, i will not reference any text making any claim about climate change.  I will source all physical constants from WolframAlpha; if i need another source i will stick to standard reference texts and (inter)national standards offices and i will cite them.

Okay, here we go.  The first thing we need is a model for how the Earth heats up and cools down every day.  Heat comes from the Sun.  I've heard various solar luminosities quoted, but typing (Solar Luminosity)/(4*pi*(1 A.U.)^2) gives me 1368 W/m^2.  Radiation that gets blocked by the atmosphere contributes to the planet's heat load but not to luminosities quoted by ground-dwelling solar enthusiasts (who usually estimate 1 kW/m^2), so i'm going to use my number.  Since the Sun only illuminates a cross-section of the Earth, i type (1368 W/m^2)*pi*(Earth Radius)^2 and get 1.748x10^17 W or 174.8 petawatts as the incident solar energy.  Probably this is high since the Earth is partly reflective, but it shouldn't be radically off.

On the cooling side, i assume the Earth is a black-body radiator with a constant surface temperature.  Obviously the poles are colder, but hopefully this cancels out my assumption that they are black-body absorbers.  The power radiated per unit area for a black body is the Stefan-Boltzmann constant times the temperature to the fourth power.  The area in question is now the surface area of the Earth, 4*pi*R^2.  Typing ((174.833 PW)/((Stefan-Boltzmann Constant)*4*pi*(Earth Radius)^2))^(1/4) gives me 278.68 K, which is 42 °F, a bit chilly but certainly a common surface temperature.  Looking good so far.
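
(For anyone who wants to reproduce these first two numbers without WolframAlpha, here is a minimal Python sketch of the same arithmetic.  The constants are typed in by hand, so treat the trailing digits as approximate.)

    import math

    L_sun = 3.846e26   # solar luminosity, W
    AU    = 1.496e11   # astronomical unit, m
    R_e   = 6.371e6    # mean Earth radius, m
    sigma = 5.670e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

    flux = L_sun / (4 * math.pi * AU**2)   # ~1368 W/m^2 at Earth's orbit
    P_in = flux * math.pi * R_e**2         # the Sun only lights a cross-section
    T_eq = (P_in / (sigma * 4 * math.pi * R_e**2)) ** 0.25  # radiates from the whole sphere

    print(f"solar constant: {flux:.0f} W/m^2")       # ~1368
    print(f"incident power: {P_in:.3e} W")           # ~1.74e17
    print(f"equilibrium temperature: {T_eq:.1f} K")  # ~278.7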

Before going on, i need to know some wavelengths so i can get into a little chemistry.  The energy of a photon can be expressed as the Boltzmann constant (k) times its temperature (T) or as Planck's constant (h) times its frequency (f).  In addition, the frequency of any wave is its speed (c) over its wavelength (λ).

E = kT = hf = hc/λ rearranges to λ = hc/kT.

A little math on the black-body intensity curve (formalized as Wien's Displacement Law) tells us that a black body radiates most of its power in photons with energies of 3-10 times kT, with the peak at 5kT.  For a 280 K planet, this makes the wavelengths of interest 5-17 microns with the peak at 10 microns.  This is indeed a good chunk of the middle-to-deep infrared zone.  Out of interest, i do the same thing with the surface temperature of the Sun (5780 K) and find that most of the power is in a band around 250-800 nm with the peak at 500 nm.  That's the entire visible spectrum plus a bit of the ultraviolet range.  Thus, sunsets are red, the daytime sky is blue and we have to wear sunscreen to block UV but we don't worry too much about solar x-rays.  That's a good cross-check.
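
(The same check in Python, if you want to try other temperatures.  The 3-10 kT band edges are the rough rule of thumb above, not an exact standard.)

    h = 6.626e-34   # Planck's constant, J s
    c = 2.998e8     # speed of light, m/s
    k = 1.381e-23   # Boltzmann constant, J/K

    def band(T):
        """Wavelengths (m) carrying most of the power: photon energies ~3-10 kT."""
        lam = lambda x: h * c / (x * k * T)
        return lam(10), lam(5), lam(3)   # short edge, peak (~5 kT), long edge

    for T in (280, 5780):
        lo, peak, hi = band(T)
        print(f"T = {T} K: {lo*1e6:.2f}-{hi*1e6:.1f} microns, peak at {peak*1e6:.2f}")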

With wavelengths in hand, we now need to see how carbon dioxide changes the picture.  This turns out to be tricky.  A spectrum taken by Dow Chemical Company in the 1960s (before anyone was thinking about global warming) and kindly digitized by NIST (National Institute of Standards and Technology) shows that CO2 has massive absorption peaks around 15 um and 4.3 um.  A 10 cm path through 200 mmHg of CO2 has an absorbance of 0.01 for the infrared light above 17 microns.  Between 5 and 13 microns the absorbance is more like 0.005 and right around the peak intensity of 10 microns, the absorbance appears to be zero.  This doesn't look good for global warming.  CO2 has some absorbance, but it gets progressively smaller the closer you get to the Earth's peak radiation.

Absorbance is kind of a weird unit, so it needs explanation.   For a monochromatic beam of light of intensity I_in traveling through a sample, Absorbance = LN(I_in/I_out).  The advantage of this arrangement is that if you double the thickness of the sample, the absorbance is doubled rather than having to square a transmission ratio.  Absorbance is always positive and higher values mean more blocked light in the way you would intuitively expect.

If the Dow Chemical sample were made 1 meter square (but still 10 cm thick), it would contain (1 mole / 22.4 liters) * (200 mmHg / 1 atm) * (0.1 cubic meters) = 1.17 moles of CO2.  I want to find out how much CO2 would be needed to transmit 1/e of the light so i can calculate the optical depth of the atmosphere.  I_out/I_in = 1/e implies an absorbance value of 1.  So, for example, if 1.17 moles/m^2 has an absorbance of 0.01, the optical depth at that wavelength is 1.17/0.01 = 117 moles/m^2.  If the sample absorbance is 0.005, the optical depth is 1.17/0.005 = 234 moles/m^2.

So how much atmosphere is there and how much CO2 is in it?  The pressure at sea level is 14.7 psi.  Most of that is N2 gas at 28 g/mol.  Typing (1 atm) / (28 grams/mole) / (1 gee) gives me 369,000 moles/m^2.  (Aside: I love that we have standardized on 'gee' as the name for Earth gravity to distinguish it from grams in our shorthand.)  If the CO2 concentration were 1 part per million (ppm), we would have 0.37 moles/m^2 of CO2 and we wouldn't be worried.  The National Oceanic and Atmospheric Administration (NOAA) says that the concentration in recent years is around 390 ppm with some seasonal variation.  (The earliest record is 315 ppm in 1960).  This gives us about 144 moles/m^2 of CO2 over our heads (or 116.5 moles/m^2 in 1960), which is about one optical depth for deep IR radiation.  Interesting...
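
(Putting the last two paragraphs together in Python.  The 22.4 L/mol figure is the molar volume at STP, and treating all the air as N2 is deliberately crude.)

    sample = (1 / 22.4) * (200 / 760) * 100   # moles of CO2 in a 1 m^2 x 10 cm cell
    depth_deep = sample / 0.01                # mol/m^2 per optical depth, >17 um
    depth_mid  = sample / 0.005               # mol/m^2 per optical depth, 5-13 um

    air_column = 101325 / (0.028 * 9.81)      # mol/m^2 of air overhead (~369,000)
    for year, ppm in ((1960, 315), (2010, 390)):
        co2 = air_column * ppm * 1e-6
        print(f"{year}: {co2:.1f} mol/m^2 CO2 = {co2/depth_deep:.2f} deep-IR optical depths")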

*************************************(Calculation 1: Playing Around)*************************************

I need to stop and make a model of the atmosphere as a whole.  Oh wait, no i don't! The International Organization for Standardization (ISO) publishes an International Standard Atmosphere (it isn't free, but many free calculators are available online).  To first order, i will assume that blocking more infrared radiation warms up the entire atmosphere evenly without changing the gradients between layers.  This obviously isn't true for large temperature changes, but if this calculation ends in even a 10% change we have much bigger problems.

Since we have some numbers handy, let's test the global warming hypothesis by comparing 1960 with now (~2010).  On average, radiation emitted half an optical depth or less away from space escapes.  Anything emitted from deeper in the atmosphere is blocked.  Adding more CO2 moves the emission layer upward (although not uniformly for all wavelengths).  The density profile of the atmosphere ensures that the ground is always much warmer than the stratosphere.  As radiation is effectively emitted from higher up, the ground will get warmer.  Or at least that's the theory.  Let's try it out with some numbers.

I'll work my way up the energy scale, starting from the deep infrared where the optical depth is 117 moles/m^2.  Half of 117 is 58.5 moles/m^2.  In 1960, deep infrared radiation could penetrate 58.5/116.5 or 50% of our atmosphere.  Since atmospheric pressure is determined by the weight of the air overhead, i will look for a height where the pressure is 0.50 atm.  This turns out to be 5500 meters above sea level.  The calculator tells me that the temperature at this altitude is 252.4 K.  In 2010, the radiation point was at 58.5/144 = 0.406 atm.  This occurs at 7000 meters where the temperature is 242.65 K.  If this were the absorbance across the entire spectrum, each of these levels would get fixed at 278.7 K in their respective years with everything else warming up to match.  In this case the Earth's surface would have warmed by 9.8 °C in 50 years.  The temperature in 1960 would be 312.4 K = 102.6 °F, so it's a good thing this isn't the case.
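
(If you'd rather not trust an online calculator: the ISA troposphere is just a 6.5 K/km linear lapse with pressure following from hydrostatic balance.  Here is a sketch of the lookup i'm doing; it's only valid below 11 km, which covers both heights in this paragraph.)

    T0, L = 288.15, 0.0065   # ISA sea-level temperature (K) and lapse rate (K/m)
    n = 9.80665 * 0.0289644 / (8.31446 * 0.0065)   # pressure exponent, ~5.256

    def radiating_level(p_frac):
        """Height (m) and temperature (K) where pressure = p_frac of sea level.
        Troposphere only (below ~11 km)."""
        h = (T0 / L) * (1 - p_frac ** (1 / n))
        return h, T0 - L * h

    for year, p in ((1960, 58.5 / 116.5), (2010, 58.5 / 144)):
        h, T = radiating_level(p)
        print(f"{year}: deep-IR radiating level ~{h:.0f} m, ~{T:.1f} K")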

Oddly, when i consider the region of lower absorbance (depth = 234 moles/m^2), i get roughly the same result.  Now the 1960 radiation surface is at sea level (288.15 K on the atmosphere calculator) and the 2010 radiation surface is at 0.812 atm -> 1720 meters -> 277.0 K for a temperature increase of 11.1 °C.  So let me deal with the absorption peaks.  Assuming a sample absorbance of 0.2, the optical depth is 1.17/0.2 = 5.85 moles/m^2.  This means i'm looking for the temperature at pressures of 0.05 atm in 1960 (20500 m -> 217.15 K) and 0.0406 atm in 2010 (21810 m -> 218.45 K) for a difference of -1.3 °C.  Bizarrely, above 20000 meters (216.65 K), the temperature starts to rise again.  The peaks are actually much more absorptive than this, and the scaling down to a surface temperature doesn't work with an inversion.  These regions are going to have a small and unclear contribution anyway, so i'm ignoring them for the rest of this calculation.

This leaves me with some wavelengths with minor absorptivity which would generate a ~10 °C difference in the surface temperature and some wavelengths with no absorptivity where adding more CO2 has no effect on the apparent surface temperature.  How do i weight them?  Since we're really worried about total power emitted, i'll weight each region by the integral of the Planck black body intensity spectrum over the frequencies it represents (see Wikipedia's "Black Body Radiation" article for details, but i'm treating this as common knowledge).  A few minutes with Wolfram Alpha* tells me that for a 280 K planet, 40.0% of the power is radiated at >16.5 microns, 11.4% in the first absorption peak at 14-16.5 microns, 17.6% in the 'greenhouse' region at 11-14 microns, 19.0% in the 'transparent' region at 8-11 microns and 11.6% in the 'greenhouse' region at 4.5-8 microns.  That adds up to 99.6%, so i ignore the wavelengths below 4.5 microns.  (One possible flaw: the Dow Chemical spectrum only goes up to 22 microns; 23.5% of the power is radiated above that wavelength.)

*The actual formula to generate a weighting factor for a-b microns (with a < b) is:
(15/pi^4)*Integrate[x^3/(e^x-1), {x, 50/b, 50/a}]
(Here x = hc/(λkT); hc/k at 280 K is ~51.4 microns, rounded to 50, and the 15/pi^4 factor makes the 0->inf integral equal to 1.)
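
(A sketch of that integral in Python, using scipy for the quadrature.  I keep the same 50-micron rounding as above; using the more precise 51.4 microns shifts the weights slightly.)

    import math
    from scipy.integrate import quad

    def weight(a_um, b_um, hc_kT_um=50.0):
        """Fraction of 280 K black-body power between a and b microns."""
        val, _ = quad(lambda x: x**3 / math.expm1(x), hc_kT_um / b_um, hc_kT_um / a_um)
        return 15 / math.pi**4 * val

    bands = [("> 16.5 um (deep IR)", 16.5, 1e6), ("14-16.5 um (peak)", 14, 16.5),
             ("11-14 um (greenhouse)", 11, 14), ("8-11 um (window)", 8, 11),
             ("4.5-8 um (greenhouse)", 4.5, 8)]
    for label, a, b in bands:
        print(f"{label}: {weight(a, b):.3f}")   # 0.400, 0.114, 0.176, 0.190, 0.116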

Even assuming the deep IR region remains a greenhouse region above 22 microns, i'm still going to assign it an additional weighting factor of (252.4/288.2)^3 = 0.672 because the radiation it emits is at a lower temperature.  The third power is used because that's the exponent for radiation per unit frequency.  So my weighting factors are now 0.4*0.672 = 0.269 for the deep IR region with a 9.8 °C increase, 0.176+0.116 = 0.292 for the near IR greenhouse region with an 11.1 °C increase and 0.19 for the transparent region with 0 °C increase, for a combined (normalized) weighted average of a 7.8 °C increase over the past 50 years.  (Housekeeping: If we assume that CO2 is completely transparent to wavelengths above 22 microns, the above calculation gives a 7.3 °C increase)
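
(The arithmetic, for the record.  Note the weights don't sum to 1, so i normalize by their total.)

    w  = [0.4 * 0.672, 0.176 + 0.116, 0.19]   # deep IR, shallow IR, transparent
    dT = [9.8, 11.1, 0.0]                     # apparent warming per region, deg C
    print(sum(a * b for a, b in zip(w, dT)) / sum(w))   # ~7.8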

*****************(Calculation 2: What I Should Have Done From The Beginning)******************

Wow! That's a huge effect.  In Ohio, 7.5 °C is the difference between a hard freeze that kills next year's pests and a ruined harvest.  I'm not really sure i want to believe a result that large that depends so heavily on a published standard atmosphere and hand-wavy assumptions.  Given the data i've already amassed, what if i do a straight power balance?

If the atmosphere is x optical depths thick in a certain range of wavelengths, then it transmits e^-x of the light in that range.  That means it absorbs 1-(e^-x) of the light.  That energy will be re-emitted fairly quickly.  If x is fairly small, half the re-emitted light will go upward to space and half will return to the surface.  So the additional power loading on the surface is (P_emitted)*(1/2)*(1-(e^-x)).  For large x (the absorption peak), i'm going to assume 2/3 of the energy returns to the surface.  One reason for this is that if i divide the atmosphere into many opaque layers and model the radiation between them, each layer sends 1/2 * 1/2 = 1/4 of the radiation from the previous layer back to the surface.  1/2+1/8+1/32+1/128+... = 2/3.  Alternatively, i could just assume that a very opaque atmosphere radiates from its coldest point, which the ISA tells me is 216 K.  Turns out (216/288)^4 = 0.316 ~= 1/3, so about 1/3 escapes and again roughly 2/3 is retained.
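
(A quick numerical check of both routes to the 2/3 figure.)

    total = sum(0.5 * 0.25**n for n in range(40))   # 1/2 + 1/8 + 1/32 + ...
    print(total)            # 0.666..., i.e. 2/3 returned to the surface
    print((216 / 288)**4)   # 0.316, i.e. ~1/3 escapes from the coldest layer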

I can already see that this method is going to lead to a very warm planet even in 1960.  Up to now, i haven't accounted for anything that reflects visible light back into space, which would reduce the overall power that needs to be radiated away as IR.  The technical term for diffusive reflection is albedo, but i can't find any references to Earth's albedo that don't ultimately link back to journals about climate science.  (Astronomers also use albedo to describe planets, but any measurement of Earth's albedo from space quickly gets snapped up by climate journals, which violates the rules of this exercise.)  Since it doesn't affect the question of recent warming, i'm going to pick 0.3 as the Earth's average reflectivity (so 70% visible absorption), which is the most common value i see on the web.  One could imagine changes in surface temperature changing the albedo, creating feedback effects: warmer oceans mean more reflective clouds, but less reflective ice.  Since these effects are very hard to model and presuppose the victory of the warming hypothesis, i will ignore them.

This leaves me with the equation:

(Stefan-Boltzmann)*T_surface^4 = 0.7*P_incident/Area + (Stefan-Boltzmann)*T_surface^4 * Sum[f*(1/2)*(1 - e^-x)]

where f is the fraction of energy emitted in bands with optical depth x.  Using the values found in Calculation 1, i get the sum as 0.260 in 1960 and 0.285 in 2010.

Terms In The Sum:
Deep IR              f = 40%      x = 117/117 in 1960     x = 144/117 in 2010
Shallow IR           f = 29.2%    x = 58.5/117 in 1960    x = 72/117 in 2010
'Window' Region      f = 19%      x = 0 in 1960           x = 0 in 2010
Absorption Peak      f = 11.4%    use 2/3 in place of (1/2)*(1 - e^-x)

Plugging these back into the power balance equation (recall P_incident = 1.748x10^17 W and Area = 4*pi*R_Earth^2), i get a surface temperature of ((0.7*1.748x10^17 W/(4*pi*(Earth Radius)^2)) / ((Stefan-Boltzmann Constant)*(1-0.260)))^(1/4) = 274.8 K = 35.0 °F in 1960 and 277.2 K = 39.3 °F in 2010 for a net change of 2.4 °C or 4.3 °F in 50 years.  As far as i know, this is mostly in line with current claims by climate scientists.  My overall temperatures are a little low, but they're pretty close and i'm ignoring the greenhouse effect for water (which NIST has digitized here), methane and various hydrocarbons.  I'm also ignoring tidal action, volcanism and probably a whole host of minor heat sources.
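
(Here is the whole of Calculation 2 in one Python sketch, using the band fractions and optical depths derived above.  Everything in it is a rough value from this post, not authoritative data.)

    import math

    sigma, S, albedo = 5.670e-8, 1368.0, 0.3   # SB constant, solar constant, reflectivity

    def surface_temp(co2_column):
        """Surface temperature (K) for a given CO2 column in mol/m^2."""
        bands = [                        # (fraction of surface power, optical depths)
            (0.400, co2_column / 117),   # deep IR
            (0.292, co2_column / 234),   # shallow IR greenhouse regions
            (0.190, 0.0),                # transparent 'window'
        ]
        returned = sum(f * 0.5 * (1 - math.exp(-x)) for f, x in bands)
        returned += 0.114 * (2 / 3)      # opaque absorption peak: 2/3 comes back
        absorbed = (1 - albedo) * S / 4  # incident flux averaged over the sphere
        return (absorbed / (sigma * (1 - returned))) ** 0.25

    for year, column in ((1960, 116.5), (2010, 144.0)):
        print(f"{year}: {surface_temp(column):.1f} K")   # ~274.8 and ~277.2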

(Housekeeping: If i assume that CO2 is transparent to IR above 22 microns (f = 16.5% for deep IR), i get Sum values of 0.186 in 1960 and 0.2015 in 2010.  This gives temperatures of 268.3 K = 23.4 °F in 1960 and 269.6 K = 25.7 °F in 2010 for a change of 1.3 °C or 2.3 °F in 50 years.)

***********************************************************************************

I have a vague memory of seeing somewhere in the news a claim that the average surface temperature of the Earth has risen by 1-2 °C since we started recording it.  If you're reading this and you want to know the details, you should go talk to a climate scientist; they've spent decades on this while i've spent a few hours.  Thinking about other heat-movers, the biggest thing i've left out is convective cooling.  Since the vapor pressure of water changes rapidly with temperature, i could imagine convection currents increasing as the ground/sea warms up, carrying heat above the greenhouse absorbers.  That would have a stabilizing influence, but "convection bubble driven by warm moist air" is meteorologist-speak for "giant storm".  Right now, i'm in favor of anything that causes more rain to fall on the Midwest.  In the long run, i'm not convinced that dumping that much power into hurricanes and thunderstorms is a net gain.

If you think the climate scientists are all toeing a party line, please consider me as an outside adjudicator.  I am a politically and socially conservative, scientifically-trained Christian who can't possibly have been indoctrinated into any sort of grand cover-up because (1) i disagree with almost all of the political and moral statements made in the name of global warming and (2) i haven't been paying enough attention.  I believe the world will end when God is good and ready to end it and not a moment before.  However, based on the above calculation, i believe that the science behind the greenhouse effect is sound.  We will not end the world, but we are changing it.

(Aside: I'm all for arguing over the implications of climate change.  For that, please address your complaint to the relevant politicians and activists.  It might be helpful to know how a 1 degree change affects various ecosystems, but i have no idea how to model that.  Probably it's hard to separate the warming component from other variables which are more obviously human-driven like toxins and GMOs.)

I'm curious now about the 'anthropogenic' question.  Is the increase in CO2 man-made or natural in origin?  The total recorded increase in the carbon content of the atmosphere is (144-116.5 moles/m^2)*(12 g/mole)*4*pi*(Earth Radius)^2 = 169 billion tons.  (The oxygen was already in the air so its mass shouldn't be counted.)  WolframAlpha helpfully tells me that this is "~1.7 x estimated mass of all oil produced since 1850 (upper limit)", although who knows where that information comes from.  Looking around the internet, sites like this one seem to agree that the average global oil consumption in the past 50 years is about 3 billion tons per year for a total of 150 billion tons.  (I can't find an authoritative source that isn't buried in government-speak, but the exercise is over so i can bend the rules.)  A lot of petroleum ends up as plastics in landfills, and even the oil that gets burned sheds a little mass as water (which promptly rains out).
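
(The same arithmetic in Python; 1e12 kg is a billion metric tons.)

    import math

    R_e = 6.371e6                          # mean Earth radius, m
    extra_column = (144 - 116.5) * 0.012   # kg of carbon per m^2 (12 g/mol)
    total = extra_column * 4 * math.pi * R_e**2   # kg
    print(f"{total / 1e12:.0f} billion tons of carbon")   # ~168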

On the other hand, we are burning forests at a pretty spectacular rate, which might make up the difference.  Conservationists on the Internet seem to agree on "1.5 acres/second" as the current rate of rainforest destruction.  That works out to 4.6e7 = 46 million acres per year.  A search for timber yield suggests that lumber and paper companies are getting about 100 green tons per acre when they clear-cut U.S. forests.  (Unsurprisingly, no one is publishing how much money they're making by logging rainforest.)  As a rough guide, this suggests that 4-5 billion tons of rainforest are destroyed each year.  That's more carbon than shows up in the atmosphere, but one could imagine scenarios where most of the mass ends up buried or converted to lumber (which is ultimately land-filled) instead of burned.  Anyway, it looks to me like we're emitting carbon at a rate at least comparable to the observed increase.

Given the other unexpected coincidences that happened during this exercise, i'm not going to say that means we are definitely the cause of global warming.  Apparently plankton absorb about 10 billion tons of carbon from the air each year, then die and carry it to the bottom of the ocean.  Since the CO2 concentration continues to rise, there must be other sources that offset this.  So i'll say with some confidence that i think additional atmospheric CO2 is causing a global increase in temperature, and with somewhat less confidence that we are the cause of the additional CO2.
 
Well, that took longer than expected.  In the interest of scientific honesty, i'm going to post this before asking a climate scientist to evaluate it.   I think the blog format lets me comment on my own posts, so we'll see whether this model is even close to theirs.  If so, then yes the model can be independently derived, even by a bumbling physicist like me.

Tuesday, October 9, 2012

A Light In The Darkness

The very first hike i went on as a Boy Scout was to Daniel Boone National Forest in Kentucky.  We had scheduled a 12 mile course to take 5-6 hours.  It was a beautiful April day, cool and bright, a great day to go hiking.  Unfortunately, a blizzard back in February had brought a number of trees down across the path and then, on this first thaw of the year, melted all at once.  When we weren't hacking through branches, we were building bridges across streams whose banks couldn't be reached with a long pole.  'Be prepared' is all well and good, but no one thought this hike would require machetes or hundreds of feet of rope.  Of course, it takes about 5 minutes of bridge planning before a bunch of bored 11-year-olds try to wade across.  It turns out streams have steep banks even when they're not visible.  To make matters worse, many of the trail markers were on downed trees or just washed away, making navigation dicey at times.

After 12 hours of this, we crested the last ridge and stopped to regroup.  With a mile to go and about half an hour of daylight left, we split into groups of kids who could travel at the same speed; when traveling at night, being lost isn't nearly so dangerous as being alone.  Naturally, the Senior Patrol Leader got stuck leading all the new kids to safety.  This guy was one of my childhood heroes, but he was having a rough night.  At the glacial pace of exhausted children, he guided us by fragments of white paint and gut instinct well into dusk.  But eventually the moonless night settled on us and there was just nothing to go by.  We should have been home hours ago; no one in our group had brought a flashlight.

But somebody had.  Off to one side of us, someone kept swinging a light right in our eyes.  The weird thing was that the light was moving around us much faster than we could walk, appeared only intermittently and didn't seem to be going in a straight line.  When you're trying to guide by night vision, a light in your face is more than a small annoyance and we learned a lot of new vocabulary from our SPL as he tried to avoid looking at it.  Finally when we couldn't even see each other, we formed a human chain and turned toward the light.  Even if the light-bearer was lost too, at least we wouldn't be alone.  After a very long time (~10 minutes), we emerged in a paved clearing.  There were three older boys there and a kid my age with a MagLite.  His dad had been the leader of the fastest group and they had reached the end point well before dark.  Only one car had been left at the trail-head, so the dads had driven back to base camp, leaving the boys with the light from the glove box and instructions: "Stay here and tell anyone who comes that we'll be back."

In other circumstances, the kid with the flashlight had no credentials as a guide.  He was a squirrelly little guy and this was his first hike as a Boy Scout too.  All he knew was that his dad was coming back here, he had a light and out in the miles of utter darkness all around him were a bunch of lost people.  So he turned on the light and swung it through the trees.  Sort of.  His attention span was pretty limited and the older boys kept yelling at him to stop wasting batteries.  No one came for a long time so he kept turning off the light until he got bored again.

As it turned out, almost no one heeded the light.  I learned much later that one of the dangers of navigating in a forest is that your sense of direction changes much faster than you realize.  If you don't have a distant fixed object to go by, it takes great skill to walk in a straight line just by looking at the trees.  And that's in broad daylight.  What is far more likely is that you will build for yourself a local reference frame with little bearing on reality and put much more confidence in it than is justified.  In those circumstances an external fixed reference that you refuse to acknowledge will appear to drift wildly around a shifting world that you have convinced yourself is stable.  Whatever path you walk, it is important to have a sense of the bigger geography around you so you can decide what ought to be the anchor point(s) of your personal world.  If the light of your world seems inconsistent, you should really think carefully about your local heading.

Eventually, of course, the convoy returned.  Trucks fanned out across the clearing and pointed their high beams into the woods.  Groups of boys appeared from every direction, soaked to the skin and nearly sleeping on their feet.  We didn't actually lose anyone that night, but we'd been lost for less than an hour.  The cost of ignoring the flashlight was a half mile or so of extra walking.  In the grand scheme of things, we failed at a very short navigational challenge and got off fairly light for being unprepared.  A lot of people who ignore their guiding light are nowhere to be found when rescue comes.

When the headlights turned on, it was obvious that that kind of power could only be emitted by a father or a ranger.  All sensible boys turned toward them, but found that they had wandered a long way from safety.  I wonder, had the flashlight been steadier, how many of us would have been waiting there when the dads returned.  Someday our Father will return with his high beams on.  In the meantime, we have all the problems that come with wielding a flashlight: limited energy, unfocused output, low visibility.  Holding up the light takes effort and there are people yelling at us to turn it off.  Even so, it is vitally important to keep the light turned on and pointed out into the darkness.  There are an awful lot of lost people out there.  Just try not to shine it in their eyes.

Monday, October 1, 2012

No Hidden Variables (Spins Are Not Coins)

I've been dealing with quantum mechanics for a while now and most of the time i feel like i'm pretty comfortable with the basics.  I know wave-functions are fundamentally different from particles even though they sometimes get used interchangeably.  I'm okay with bras and kets and using operators for things like position and momentum.  But every once in a while something reminds me that i'm still glossing over a lot of the weirdness in my head.

There's a famous argument between Einstein and Bohr about wave-functions.  Basically, Einstein argues that the Heisenberg Uncertainty Principle doesn't mean that particles don't have definite positions and momenta, just that we can't measure them.  (Search "Einstein-Podolsky-Rosen" or "EPR Paradox")  The consensus of most of the physics community is that Bohr successfully defended the indefinite nature of quantum mechanics through a series of thought experiments.  (Einstein's assumptions are referred to as 'local reality', giving Bohr's alternative the unhelpful moniker 'non-reality'.)  Thought experiments are all well and good if you already have your Nobel Prize, but a Prof. John Bell of CERN actually proposed a class of experiments to distinguish between a fundamentally classical world and a fundamentally quantum one.  Many of the experiments are quite subtle (and therefore easy for the layman to ignore), but one of them really bothers me.

Suppose you have a black box filled with spin-0 particles which occasionally decay into a pair of spin-1/2 particles.  Each pair will necessarily go in opposite directions (conservation of momentum) and have opposite spin (up or down, conservation of angular momentum).  Until they reach some outside observer, they are 'entangled' because the result of one spin measurement will depend on the result of the other.  This type of system has been constructed, and indeed, if a pair of observers agree beforehand on what axis they are using, they always get opposite results.  If Observer 2 rotates his axis by 90 degrees, his measurements are now completely uncorrelated with Observer 1's.  But that's okay; if you had two random vectors pointing in opposite directions, knowing the x-component of one would tell you nothing about the y-component of the other.

Where things get weird is when you let the measurement axes float around.  Suppose you do the same experiment, but for each decay Observer 1 uses a vertical measurement axis while Observer 2 randomly selects an axis which is either vertical or rotated by x or by 2x.  Later they compare their data, only then revealing the relative angle between their measurements.  If the angles are aligned, they get 100% correlation (always opposite results).  If misaligned slightly, the correlation is still large but not 100%.  If you imagine the spins as classical vectors whose orientation is merely unknown (all you can measure is 'up from horizontal' or 'down from horizontal') then the chance of a rotation by x degrees pushing the result through horizontal is x/180.  The chance for a rotation of 2x is 2x/180.  So if the results of measurements skewed by x are correlated 1-a of the time, then measurements skewed by 2x will be correlated 1-2a of the time, provided x is smaller than 45 degrees.  But if the spins are quantum spins, then you need to calculate the wave-function overlap, which works out to cos(x).  For small x (in radians), this makes the correlation for measurements skewed by x equal to 1-(1/2)*x^2 == 1-a, but for a skew of 2x, it is 1-(1/2)*(2x)^2 = 1-4a!  Doubling the angle quadruples the loss of correlation.
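
(A quick numerical illustration of the scaling, using the conventions above: mismatch probability x/180 in the classical hidden-vector picture versus 1 - cos(x) for the quantum overlap.)

    import math

    def mismatch(skew_deg):
        """(classical, quantum) mismatch probabilities at a given skew angle."""
        return skew_deg / 180, 1 - math.cos(math.radians(skew_deg))

    c1, q1 = mismatch(5)
    c2, q2 = mismatch(10)
    print(f"classical: doubling the skew multiplies mismatch by {c2 / c1:.2f}")  # 2.00
    print(f"quantum:   doubling the skew multiplies mismatch by {q2 / q1:.2f}")  # ~3.99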

To get how crazy this is, imagine you had a pair of heavily weighted coins which flipped heads 99% of the time.  If you flip one of them a bunch of times, the results are 99% correlated with 'all heads'.  Now you flip the pair of them a bunch of times and record how often they show the same side.  You would expect the correlation to be at least 98%, since the combined incidence of 'not heads' is only 2%.  But if you're flipping quantum mechanical coins, you could get only 96% correlation.  The pair disagrees with each other more often than the two of them, combined, disagree with the 'all heads' reference.  This is a sign that they don't have a definite direction, even a hidden one, until you look at them.  Only the correlation can be measured.

The ideal of 'realism' (that uncertainty doesn't prevent unknowable variables from existing) is closely linked to the idea of 'counterfactual definiteness', the idea that an experiment you didn't perform still has a definite result.  In the case of the coins, this would mean checking that either coin matches 'heads' rather than checking that they match each other.  For quantum spins, counterfactual definiteness says that even though you didn't measure the entangled states along parallel axes, they would have had opposite spins if you had.  Non-reality says that question is meaningless, even though it can be answered in the abstract with no uncertainty.