Standard Model Lagrangian Density


Back in 1999, as a procrastination exercise while working on my Ph.D. in physics at UC Davis, I spent a couple of hours on a Friday night writing out a fairly ineloquent form of the Standard Model Lagrangian density. I unpacked Appendix E in Diagrammatica by Nobel Laureate Martinus Veltman and compiled it into one equation, making the pdf and LaTeX files accessible on my old website.

Since that time, this form of the Standard Model Lagrangian density has received some attention in Symmetry Magazine, TED (Brian Cox, “CERN’s Supercollider”), Wikipedia, and PBS Space Time (“The Equation That Explains (Nearly) Everything!”), amongst other places including artwork (by James De Villiers). Recently, Don Lincoln at Fermilab has also highlighted it on his popular YouTube channel.

Since 2006, I’ve been a professor of physics at Cal Poly, San Luis Obispo. Since I haven’t been at UC Davis for a while and don’t have easy edit access to the content on my old site, I’m making the files available here on my personal page. This includes some long overdue corrections. Only the pdfs are available below, but I will make the LaTeX file available soon. Thanks to the many people who have contacted me over the past 25 years to provide feedback and discussion!

Part II: Mass of the Death Star in Episode IV

This is the second in a two part post where I calculate the size and mass respectively of the Death Star in Episode IV (DS1).  Estimating the mass will inform discussion about the power source of the station and other energy considerations.

Part II: Mass of DS1

As argued in Part I, I assert that the diameter of DS1 is approximately 60 km based on a self-consistent scale analysis of the station plan schematics as shown during the briefing prior to the Battle of Yavin.

A “realistic” upper limit for the mass is obtained by filling the volume of the 60 km diameter DS1 with the densest (real, stable) element currently known. This is osmium, with a mass density of 2.2E4 kilograms per cubic meter. This places the mass at 2.5E18 kg with a surface gravity of 0.05g. A filling fraction of 10% would then place a “realistic” estimate of the upper limit at 2.5E17 kg. Other analyses have made similar assessments using futuristic materials with some volume filling fraction, also putting the mass somewhere around 10^18 kg assuming a radius of 160 km.

In this mass analysis, using information from the available footage from the Battle of Yavin, I find a DS1 mass of roughly 2.4E23 kg, about a million times the mass of a “realistic” approximation. Any supporting superstructure would be a small perturbation on this number. This implies a surface gravity of an astounding 448g. To account for this, my conclusion is that DS1 has a 40 m radius sphere of (contained) quark-gluon plasma or a 55 m radius quantity of neutronium at its core. Such materials, if converted to useful energy with an efficiency of 0.1%, would be ample to 1) provide the 2.21E32 J/shot of energy required to destroy a planet as well as 2) serve as a power source for sub-light propulsion.

Details

The approach here uses the information available in the schematics shown during the briefing.  The briefing displays a simulation of the battle along the trench to the exhaust port.  Again, as shown in Part I of this post, the simulation scale is self-consistent with other scales in both the schematic and the actual battle footage.  As shown in Figure 1, the proton torpedo is launched into projectile motion only under the influence of gravity.  It appears to be at rest with respect to the x-wing as it climbs at an angle of about 25 degrees.

Figure 1

Figure 2

From the previous scale analysis in Part I, the distance from the port, d, and the height, h, above the port can be estimated. They are approximately equal, h = d = 21 meters. The length of the x-wing is L = 12.5 m. After deployment, the trajectory rises slightly and then falls into the exhaust port as shown in Figure 2. A straightforward projectile motion calculation gives the formula for the downward acceleration necessary for an object to follow this trajectory under these conditions

a=\frac{2 V_{0}^2}{d}\left(\frac{h}{d}+\tan{\theta}\right)\cos^2{\theta}\ \ \ \ (1)

where \theta is the launch angle and V_0 is the initial speed of the projectile. If we assume for simplicity that the angle \theta = 0 degrees and h = d, the formula simplifies to

a=\frac{2 V_{0}^2}{d}\ \ \ \ (2).

From the surface gravity, the mass of DS1 can be obtained, assuming Newtonian gravity,

M=\frac{a R^2}{G}\ \ \ \ (3).

Here G = 6.67E-11 N m^2/kg^2 is the gravitational constant.  For a bombing run, let’s assume the initial speed of the projectile to be the speed of the x-wing coming down the trench.  To estimate the speed, v, of the x-wing, information from the on-board battle computers is used.  In Part I, the length of the trench leading to the exhaust port was estimated to be about x = 4.7 kilometers.  On the battle computers, the number display coincidentally starts counting down from a range of about 47000 (units not displayed).  From this connection, I will assume that the battle computers are measuring the distance to the launch point in decimeters.  From three battle computer approach edits, shown in Clip 1 below, and using the real-time length of the different edits, the speed of an x-wing along the trench is estimated to be about 214 meters/second (481 miles/hour).  This is close to the cruising speed of a typical airliner — exceptionally fast given the operating conditions, but not unphysical.  This gives a realistic 22 seconds for an x-wing to travel down the trench on a bombing run.

Using this speed and the other information, this places the surface gravity of DS1 at about 448 g (where g is the acceleration due to gravity on the surface of the earth).  DS1 would have to have a corresponding mass of 2.4E23 kg to be consistent with this.
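For readers who want to plug in numbers themselves, here is a minimal Python sketch of Equations (1)-(3) using the values quoted above (V0 = 214 m/s, h = d = 21 m, theta = 0). The radius fed into Equation (3) is an input assumption; the 60 km value used below is the one that reproduces the roughly 2.4E23 kg figure quoted in this post.

```python
# Minimal sketch of Eqs. (1)-(3); numbers are those quoted in the text.
import math

G = 6.67e-11      # gravitational constant, N m^2 / kg^2
g_earth = 9.81    # m/s^2

def required_acceleration(v0, d, h, theta_deg):
    """Downward acceleration needed to follow the torpedo trajectory, Eq. (1)."""
    theta = math.radians(theta_deg)
    return (2.0 * v0**2 / d) * (h / d + math.tan(theta)) * math.cos(theta)**2

def mass_from_surface_gravity(a, R):
    """Newtonian mass consistent with surface acceleration a at radius R, Eq. (3)."""
    return a * R**2 / G

a = required_acceleration(v0=214.0, d=21.0, h=21.0, theta_deg=0.0)
print(f"required acceleration: {a:.0f} m/s^2  (~{a / g_earth:.0f} g)")

# R is an assumed input: 6.0e4 m reproduces the ~2.4E23 kg quoted in the text.
M = mass_from_surface_gravity(a, R=6.0e4)
print(f"implied mass: {M:.2e} kg")
```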

However, it is clear that considerable liberty was taken in the above analysis, and perhaps too much credibility was given to the battle simulation alone, which does not entirely match the dynamics shown in the footage of the battle. Upon inspection of the footage, the proton torpedoes are clearly launched with thrust of their own at a speed greater than that of the x-wing.  A reasonable estimate might put v (torpedo) at roughly twice the cruising speed of the x-wing.  Moreover, the torpedoes are obviously not launched a mere d = 21 meters from the port (although h = 21 m is plausible), but rather far enough away that the port is just out of sight in the clip.  Finally, the torpedoes enter the port at an awkward angle and appear to be “sucked in.”  One might argue that there could be a heat-seeking capability in the torpedo.  However, this seems unlikely.  If this were the case, it would greatly dilute the narrative of the battle, which strongly indicates not only that the shot was very difficult but that it required the power of the Force to be successful.  Clearly, “heat-seeking missiles along with the power of the Force” is a less satisfying message.  Indeed, some have speculated that the shot could only have been made by Space Wizards.  These scenarios, and other realistic permutations, are in tension with the simulation shown in the briefing.  Based on different adjustments of the parameters v (torpedo), h, d, and \theta, one can tune the value of the surface gravity and mass to be just about anything.

However, if we attempt to be consistent with the battle footage, we might assume again that \theta = 0 degrees while d = 210 m (keeping h = 21 m), and take v (torpedo) = 2 v (x-wing) to account for the torpedo’s own propulsion.  The speed of the x-wing can remain the same as before at 214 m/s.  Even with this, the surface gravity will be 18g.  This still leads to a mass over 10000 times larger than the mass of a realistic superstructure.  In this case, a ball of neutronium 18 m in radius could still be contained in the center to account for this mass.

Nevertheless, my analysis is based on the following premise: the simulation indicates that the rebel analysts believed, based on the best information available, that a dead drop of a proton torpedo into the port, only under the influence of DS1’s gravity, was at least possible at d = h = 21 meters at the cruising speed of an x-wing flying nap-of-the-earth along the trench under fire.  Any dynamics that occurred in real time under battle conditions would ultimately need to be consistent with this.

The large intrinsic surface acceleration may seem problematic (consider tidal forces or other substantial technological complications).  However, as demonstrated repeatedly in the Star Wars universe, there already exists exquisite technology to manipulate gravity and create the appropriate artificial gravity conditions to accommodate human activity (e.g. within DS1, the x-wings, etc.) under a very wide range of circumstances (e.g. acceleration to hyperspace, rapid maneuvering of spacecraft, artificial gravity within spacecraft at arbitrary angles, etc.).

 

Implications of such a large mass

One hypothesis that would explain such a large mass is that DS1 had, at its core, a substantial quantity of localized neutronium or quark-gluon plasma contained as an energy source.  Such a source with high energy density could be used to power a weapon capable of destroying a planet, to drive propulsion, and to support other activities.  For example, the density of neutronium is about 4E17 kilograms per cubic meter and that of a quark-gluon plasma is about 1E18 kilograms per cubic meter.  Specifically, a contained sphere of neutronium of radius 55 meters at the center of the Death Star would account for the calculated mass and surface gravity of DS1.

It has been estimated that approximately 2.4E32 joules of energy would be required to destroy an earth-sized planet.  If 6.7 cubic meters of neutronium (e.g. a sphere of radius of about 1.2 m) could be converted to useful energy with an efficiency of 0.1%, this would be sufficient to destroy a planet (assuming the supporting technology was in place).  This is using the formula

\Delta E=\epsilon\Delta m c^2\ \ \ \ (4)

where \Delta E is the useful energy extracted from a mass \Delta m with efficiency \epsilon.  The mass is converted to a volume using the density of the material.
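As a quick numerical check of Equation (4), the short Python sketch below (assuming the 0.1% efficiency and the neutronium density quoted above) recovers the fuel volume and the radius of the equivalent sphere:

```python
# Check of Eq. (4): fuel needed to supply the planet-destroying shot.
import math

c = 3.0e8               # speed of light, m/s
E_shot = 2.4e32         # energy to destroy an earth-sized planet, J (quoted above)
eps = 0.001             # assumed conversion efficiency (0.1%)
rho = 4.0e17            # density of neutronium, kg/m^3

dm = E_shot / (eps * c**2)                 # fuel mass required, kg
V = dm / rho                               # fuel volume, m^3
r = (3.0 * V / (4.0 * math.pi)) ** (1/3)   # radius of the equivalent sphere, m
print(f"dm = {dm:.2e} kg, V = {V:.1f} m^3, r = {r:.2f} m")
```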

By using the work-energy theorem, the energy required to accelerate DS1 to an arbitrary speed can be estimated.  Assuming the possibility of relativistic motion, it can be shown (left as an exercise for the reader) that the volume V of fuel of density \rho required to accelerate an object of mass M to a light-speed fraction \beta at efficiency \epsilon is given by

V=\left(\frac{1}{\sqrt{1-\beta^2}}-1\right)\frac{M}{\epsilon\rho}\ \ \ \ (5).

This does not account for the loss of mass as the fuel is used, so it represents an upper limit.  For example, to accelerate DS1 with M = 2.4E23 kg from rest to 0.1% of the speed of light (0.001 c) would require about 296 cubic meters of neutronium (a sphere of radius 4.1 m).
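A similar sketch of Equation (5), with the same assumed efficiency and density, reproduces the propulsion fuel volume quoted above to within rounding:

```python
# Check of Eq. (5): fuel needed to accelerate DS1 to a fraction beta of c.
import math

M = 2.4e23       # DS1 mass, kg
beta = 0.001     # target speed as a fraction of the speed of light
eps = 0.001      # assumed conversion efficiency (0.1%)
rho = 4.0e17     # density of neutronium, kg/m^3

gamma = 1.0 / math.sqrt(1.0 - beta**2)
V = (gamma - 1.0) * M / (eps * rho)          # fuel volume, Eq. (5)
r = (3.0 * V / (4.0 * math.pi)) ** (1/3)     # radius of the equivalent sphere, m
print(f"V = {V:.0f} m^3, r = {r:.1f} m")
```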

From this, one concludes that the propulsion system, rather than the primary weapon, may be the largest energy consideration.  For example, suppose DS1 enters our solar system from hyperspace (whose energetics are not considered here) and finds itself near the orbit of Mars.  It would take about two days for it to travel to Earth at 0.001 c.

 

Part I: Size of the Death Star in Episode IV

This is the first in a two part post where I calculate the size and mass respectively of the Death Star in Episode IV (DS1).  At the end of Part II I will discuss thoughts about the energy source of DS1.

Part I: Size of DS1

Conventional wisdom from multiple sources places the size of DS1 at about 100-160 km in diameter.  Based on an analysis of the station’s plans acquired by the Rebels, I estimate that the diameter of DS1 is 60 kilometers, not 100 km to 160 km.  To bolster the case, this scale is compared to other scales for self-consistency, such as the width of the trench leading to the exhaust port in the Battle of Yavin. Part II of the post will focus on the mass of DS1 using related methods.

To estimate the size of DS1, I will begin with the given length scale of the exhaust port w = 2 m.  This information was provided in the briefing prior to the Battle of Yavin, where the battle strategy and DS1 schematics are presented.  This scale, when applied to Figure 1, is consistent with the accepted length of an x-wing L = 12.5 m.  I assume that the x-wing has an equal wingspan (there do not seem to be consistent values available).  I am also assuming that the “small, one-man fighter” referred to in the briefing is an x-wing, not a y-wing.  The x-wing is a smaller, newer model than the y-wing and it is natural to take it as the template.  The self-consistent length scales of w and L establish the length calibration for the rest of the analysis.

Figure 1: A close up view of the exhaust port chamber during final phase of the bombing run.  The port width is given as w = 2 m.  The length of the x-wing is L = 12.5 m.  The forward hole, of length l, is then determined to be about 10 m.

From this, I extract the length of the smaller forward hole in Figure 1 to be approximately l = 10 m.

Figure 2: As the plans zoom out, a larger view of the exhaust port chamber of width t = 186 m.  The first hole is shown with width l = 10 m.  The scale of width l was determined based on information in Figure 1.  The width of t was determined based on the scale of l.

Using l as a calibration, this establishes the exhaust port chamber in Figure 2 to be approximately t = 186 m.
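The calibration chain itself is just repeated proportional scaling. A minimal Python sketch of the idea is below; the pixel spans are hypothetical placeholders for illustration, not measurements taken from the actual frames.

```python
# Sketch of the calibration chain: each known length calibrates the next
# measurement. Pixel spans below are hypothetical placeholders.
def length_from_reference(ref_length_m, ref_span_px, target_span_px):
    """Scale a target's on-screen span using a reference of known length."""
    return ref_length_m * (target_span_px / ref_span_px)

# Step 1: the w = 2 m port width calibrates the forward hole l (~10 m).
l = length_from_reference(ref_length_m=2.0, ref_span_px=40, target_span_px=200)

# Step 2: l calibrates the exhaust port chamber width t (~186 m).
t = length_from_reference(ref_length_m=l, ref_span_px=25, target_span_px=465)

print(f"l = {l:.0f} m, t = {t:.0f} m")
```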

In Figure 3a and Figure 3b, circles of different radii were overlaid on the battle plans until a good match for the radius was established.  Care was taken to have the circles osculate the given curvature and to center the radial line down the exhaust conduit.  From here, the size of the exhaust port chamber, of width t, was used as a calibration to approximate the diameter of DS1 as D = 60 km (red).  Several other circles are shown in Figure 3 to demonstrate that this estimation is sensible: 160 km (purple), 100 km (black), and 30 km (blue).  It is clear that a diameter of 160 km is not consistent with the station’s schematics.  A diameter of 100 km is not wildly off, but is systematically large across the given arc length.  A diameter of 30 km is clearly too small.

While a diameter of 60 km may seem modest in comparison to the previously estimated 100 km to 160 km range, an appropriately scaled image of New York City is overlaid in Figure 4 to illustrate the magnitude of this system in real-world terms; even a 60 km sphere (red) is an obscenely large space station, considering this is only the diameter — more than adequate to remain consistent with existing canon.  The size of the main ring of the LHC (8.6 km across) is overlaid in light blue, also for scale.

Figure 3a (to the right of the exhaust port chamber): As the plans zoom out further, the exhaust port chamber of width t = 186 m is shown with the curvature of DS1 (the square blob is the proton torpedo that has entered the port).  The scale of t was determined based on information in Figure 2.  Several circles with calibrated diameters based on the scales set in Figures 1 and 2 are shown.  The 60 km diameter circle in red is arguably the best match to the curvature.  Care was taken to match the point of contact of the circles to a common central location along the radial port.

Figure 3b (to the left of the exhaust port chamber): The same idea as Figure 3a.  The 60 km diameter is still arguably the best match, although it is a little shy on this side. The 100 km diameter, the next best candidate, overshoots by more than the 60 km circle undershoots. Since an exact mathematical fit wasn’t performed, the expected diameter is probably a bit higher than 60 km, but significantly lower than 100 km.

 

 

Figure 4: A 60 km diameter circle in red (with yellow diameter indicator) shown overlaid on a Google Earth image of the greater New York City region.  The blue ring is an overlay of the scale of the Large Hadron Collider at CERN (about 8.5 km in diameter) — note the blue ring is not a scaled representation of the main weapon!  The main message here is that a 60 km station, although smaller than the accepted 100-160 km, is still freakin’ HUGE.  At this scale, there is only a rather modest indication of the massive urban infrastructure associated with New York City.

As another check on self-consistency, the diameter D is then used to calibrate the successive zooms on the station schematics, as shown in Figures 5 and 6.  The length B = 10 km is the width of the zoom patch from Figure 5, X = 4.7 km is the length of the trench run, and b = 134 m is the width of one trench sector. From Figure 6, the width of the trench is estimated to be b’ = 60 m, able to accommodate roughly five x-wing fighters lined wingtip-to-wingtip.  This indicates that the zoom factor is about 1000x in the briefing.

Figure 7 is a busy plot.  It overlays several accurately scaled images over the 60 m trench, shown with two parallel red lines, to reinforce plausibility.  Starting from the top: an airport runway with a 737 ready for takeoff (wingspan 34 m); a 100 m-wide yellow calibration line; a 60 m-wide yellow calibration line; the widths of an x-wing (green, Wx = 12.5 m, where I’ve assumed the wingspan is about the same as the length — there does not seem to be a consensus online; I’ve seen the value quoted as 10.9 m, but it isn’t well-sourced) and a tie fighter (red, 6.34 m); and a scaled image from footage of two x-wings flying in formation, with a yellow 60 m calibration line as well as a calibrated green arrow placed over the nearer one to indicate 12.5 m.  As predicted, about five x-wings could fit across based on the still image.  Also from this, the depth of the trench is estimated to be about 60 m.  The scales are all quite reasonable and consistent. It is worth noting that if the station were 100 km across, the next possible sensible fit to the arc length in Figure 3, the width of the trench would be about 100 m, well over one and a half times the current scale.  This would not be consistent with either the visuals from the battle footage or the airport runway scales.

In short, while there is certainly worthy critique of this work, I argue that, after a reasonably careful analysis of the stolen plans for DS1, all scales paint a self-consistent picture that the diameter of DS1 is very close to 60 km.

Figure 5: A zoom-out of DS1 in the briefing based on the stolen battle plans.  D = 60 km is the diameter and B = 10 km is the width of the patch in the region of interest near the exhaust port.

Figure 6: A zoom-in on the region of interest patch near the exhaust port channel (see Figure 5) with B = 10 km.  The channel itself is about X = 4.7 km long.  The width of one trench sector is about b = 134 m.  Inset is a further zoom of the insertion point along the channel.  The width of the channel itself is about b’ = 60 m.

Figure 7: A zoom of the insertion point along the channel for the bombing run.  Several elements are overlaid for a sense of scale and for consistency comparisons.  The red parallel lines represent the left and right edges of the channel.  From the top of the figure is a 737 with a wingspan of 34 m.  The 737 is on a runway (at SFO).  Down from the 737 is a yellow line that represents 100 m.  This would be the width of the channel if D = 100 km, which is clearly much too large based on the battle plans.  The next horizontal yellow arrow is the 60 m width based on the scales assumed with D = 60 km.  Next down, embedded in the vertical lines of the runway: a green block representing the width of an x-wing and a red block representing the width of a tie fighter.  Finally, at the bottom is a shot from the battle footage.  It has been scaled so the edges of the walls match the width of the channel (shown as a horizontal yellow arrow).  The width of the near x-wing is shown with a green horizontal arrow, which matches the expected scale of an x-wing.

 

Teaching Philosophy


I was recently promoted to full Professor of Physics at Cal Poly.  I joke that, at nearly 50, I’ve finally grown up and gotten a real job.  This was roughly a thirty-year project: from starting my freshman year as a physics major at San Jose State in 1986, through my master’s degree, through my Ph.D. at UC Davis, through three postdocs, through a tenure-track probationary period at Cal Poly, through a tenured Associate Professor position, to finally becoming a full Professor at Cal Poly in the fall of 2016.

In my case for promotion, I had to submit a teaching philosophy, which I would like to share here.  The ideas in it are not new; I don’t claim to have invented them.  Moreover, they are not cited because, in some ways, they are rather ordinary, blending into the background mythology of teaching culture.  However, I feel that the particular personal way I have presented the ideas is perhaps worth sharing.  The essence can be summarized as this: “Like I began, I have applied my own teaching principles to my own journey in learning how to teach.”

Statement of Teaching Philosophy

In my nine years at Cal Poly, I feel I’ve grown as a teacher and mentor. However, this newfound wisdom also makes me question my own growth; I now know how little I know whereas, when I started, I thought I had it all figured out. As someone not formally educated in Education, here are some of the things I’ve learned.

I believe education is important, but its success and purpose are difficult to quantify.

I believe education is important, but its success and purpose are difficult to quantify.  In education, success and purpose can become a tautological exercise where one begins defining accomplishments in terms of what one is accomplishing. This is not unlike adding items you have already completed to a to-do list and then immediately checking them off to feel productive. There are many sensible metrics of educational effectiveness, but most are of a specialized nature or difficult to identify. Like Heisenberg’s celebrated Uncertainty Principle, it seems that the more specific a metric of one aspect of educational success is, the more uncertain is its ability to measure another aspect of success. In my own teaching and mentoring in physics, this ambiguity has driven me to reflect on the purpose of our curriculum and focus on what we want to achieve and how to measure it. Nevertheless, I believe that education generates understanding of the world, removes ignorance, and allows us to face the future with courage and dignity.

Education generates understanding of the world, removes ignorance, and allows us to face the future with courage and dignity.

But setting aside abstract philosophies of education, my conclusion is that a successful education is one where a student discovers their own definition of success and develops the skills to pursue it.  My particular specialty is physics, but I’m also human. If I facilitate this process by providing some skills and focus through my physics and my humanity, both in and outside the classroom, then I have been a successful teacher and mentor. I have helped guide many students through the struggles of the technical, day-to-day details of coursework, mentored them as they find their career path, and consoled them in their struggle to find out who they are as a person.

A successful education is one where a student discovers their own definition of success and develops the skills to pursue it.

Helping physics students find their own definition of success and finding the corresponding skill set to accomplish this has been challenging for me. But it is a challenge I have committed my life to and embrace with aplomb. In physics, one of the biggest barriers to promoting student success is also one of its greatest strengths: physics is both very generalized and yet fundamental by nature. Physics involves scientific ideas spanning about 30 orders of magnitude in space and time – from the quantum world to the cosmos and everything in between. It is a daunting task to prioritize these ideas for undergraduates and generate practical skills while also imparting rigor, problem solving, and deep understanding of fundamental concepts about the nature of reality. In my own work, I continue to experiment with different teaching styles and techniques, but have settled into what might be called the “traditional method” of lecturing enthusiastically at a board with chalk, asking students questions in class, giving regular exams and quizzes with quick feedback, and being available to students in and out of class, either online or in person. This path has allowed me to optimize my own ability to convey to students my enthusiasm for physics and to coach them in a positive, constructive way through the learning process. Feedback from students indicates they genuinely appreciate this.

Being satisfied and fulfilled as a teacher is a critical part of the student’s success and learning process.

Being satisfied and fulfilled as a teacher is a critical part of the student’s success and learning process. An empowered instructor is one who feels they are making a difference. An instructor driven far outside their comfort zone will not facilitate student success. If an instructor’s enthusiasm is suppressed, both instructor and student will suffer. Nevertheless, a teacher should be flexible and encouraged to experiment with different teaching methods while innovating, but they should also settle into a style that is most comfortable for them without becoming complacent or compromising intellectual integrity. Because I’ve found my comfort zone, this also creates a positive learning environment for the students. They trust me to guide them on the intellectual journey because of the friendly confidence I try to convey.

The future will always require good teachers to engage and inspire students face-to-face.

The future will always require good teachers to engage and inspire students face-to-face.  In my opinion, teaching and learning cannot be completely emulated with computer algorithms, online courses, or simply reading about a topic at home. Yes, all of those things can augment a learning experience but, until the invention of neural implants, which instantly inject knowledge, experience, and mastery directly into the human brain, interaction with a human teacher is necessary for deep learning. While I do add some elements of technology to my courses and mentoring, I try to learn everyone’s name and treat them as a coach would treat a team: we are all in it together and let’s try to win this game together. In this context, it gives me a chance to connect more with students inside and outside the classroom and give them very personalized feedback. Grades are not given as an authoritative effort to control, but rather as a genuine source of assessment that helps them improve their mastery. I aim to allow students to make mistakes and learn from them without feeling like they are a failure.

A teacher gives a student a foothold into a complex topic and helps them initiate the learning process.

A teacher gives a student a foothold into a complex topic and helps them initiate the learning process. A subject like physics is overwhelming – to try and learn it from scratch without guidance would be an intimidating undertaking. Without this foothold, developing skills in physics would be quite challenging. But, like with any project, it is best to master it in small, digestible chunks. The teacher is one who has made the journey through the material and can break the material into the right-sized pieces. I try to take the perspective of the student, remembering what it is like not to know something, and then convey the concepts that allowed me to make the transition to an expert. I reinforce this by giving many content-rich homework assignments and content-rich take-home exams in addition to challenging in-class exams.

A teacher can facilitate learning and guide the process, but cannot be responsible for it.

A teacher can facilitate learning and guide the process, but cannot be responsible for it.  A topic cannot be mastered in a single course. It takes repeated exposure to a topic over many years to begin to develop a meaningful understanding of something new. Learning a topic is a complicated undertaking. How much one has learned may not be realized for weeks, months, or even years after being exposed to it. Sometimes learning happens actively and voluntarily, but it also happens passively and involuntarily. To ask students right after a course “how much did you learn?” is a meaningless question. Most instructors learn new things about material they have been teaching for decades. Given that students are the least qualified to assess their own learning, how could they possibly know how much they learned after a course if they have no baseline to compare it to? The adage “the more you know, the more you know how little you know” applies here. Similarly, akin to the Dunning-Kruger effect, “the less you know, the less you know how little you know.” This latter effect tends to breed overconfidence. A good teacher gives students a sense of a bigger world of knowledge, generating some self-doubt, but without squelching enthusiasm to explore it further.

One role of a teacher is to, without sacrificing rigor, promote student satisfaction and to inspire students to learn more about a topic for the rest of their lives.

 One role of a teacher is to, without sacrificing rigor, promote student satisfaction and to inspire students to learn more about a topic for the rest of their lives.  In some ways, I value student satisfaction and the inspiration to continue their intellectual journey more than the content itself. In this respect, I try and provide the student with an educational Experience rather than just another class.

So, like I began, I have applied my own teaching principles to my own journey in learning how to teach.  By doing so, I have learned how little I knew. I have defined my own success and pursued the skills to attain it. I have taken initiative in generating and expanding my own learning process. Without compromising rigor, I have also found satisfaction in the experience of teaching, inspiring me to continue learning about it the rest of my life. This experience, I hope, makes a difference to my students and allows them to find their own successful paths.

Science Lies? Tales from the Science Illuminati

I’m a physics professor at the California Polytechnic State University in San Luis Obispo, CA.  Recently I came to work early to find my office door decorated with the word “LIES” written in a childish scrawl across an “I Support Science” Darwin Fish sticker I have in the window of my office door.  The graffito, written with a red whiteboard marker, was probably composed by a student the evening before while studying in the building.  It was a minor annoyance to remove it because it was written on the frosted matte side of the window, which wasn’t really meant to be used as a whiteboard.  I notified my Chair and my Dean of the situation.  They were sympathetic and obviously found the vandalism inappropriate.

I think it bothered me for all the right reasons.  I’m reminded that campus climate is not exactly universally friendly toward certain scientific principles that happen to be in tension with people’s religion.  That’s not good.  It makes me uncomfortable.  But in addition to the message, what makes me feel strange is the willingness to deface a professor’s door at all.  Even if someone wrote “cool!” across the fish, it would feel weird.  Who does that?

But, I was also able to dismiss it for all the right reasons. When the best argument someone can muster against evolution is an anonymous “LIES” scribbled on a physics professor’s door in the middle of the night,  it betrays a lazy and crippling intellectual weakness.  The feeble anonymous assertion “LIES” seems a cowardly gasp.   It’s a spontaneous act by a creationist that un-coyly says “I strongly disagree with you.”  But it is weird language. A lie is a deliberate act to deceive.  It implies evolution is like a conspiracy perpetuated by the Science Illuminati.  It would be the kind of anti-establishment graffiti someone would see in the 70s.  Naturally, I know exactly what it means to write “LIES” across an “I Support Science” Darwin Fish.  It is obvious.   However, the word choice is funny.  I think what they really meant was “WRONG.”

Some peers have shrugged off the defacement with a “kids will be kids” attitude: “Yes, it’s inappropriate, but you sort of had it coming with that provocative sticker.”  It is a sad state of affairs when passively declaring support for one of the most evidence-based theoretical frameworks in all of science is considered “provocative.”  The most support I’ve received is from the students in my department.  They were genuinely shocked at the event and were actually concerned about me, unambiguously condemning the action.  One student wrote me a very touching email making it clear that he and the other students stood behind me.  Although an unfortunate context, that part really did make me feel greatly supported.  It is a privilege to work with such colleagues.

Now back to sacrificing another Schrödinger’s Goat in my weekly ritual to actively perpetuate my sinister New World Order Parameter.

The field near a conductor

This post is directed primarily at physics students and instructors and stems from discussions with my colleague Prof. Matt Moelter at Cal Poly, SLO. In introductory electrostatics there is a standard result involving the electric field near conducting and non-conducting surfaces that confuses many students.

Near a non-conducting sheet of charge with charge density \sigma, a straightforward application of Gauss’s law gives the result

\vec{E}=\frac{\sigma}{2\epsilon_0}\hat{n}\ \ \ \ (1)

Near the surface of a conductor with charge density \sigma, an application of Gauss’s law gives the result

\vec{E}=\frac{\sigma}{\epsilon_0}\hat{n}\ \ \ \ (2)

The latter result comes about because the electric field inside a conductor in electrostatic equilibrium is zero, killing off the flux contribution from the face of the Gaussian pillbox inside the conductor. In the case of the sheet of charge, this same side of the pillbox distinctly contributed to the flux. Both methods are applied locally to small patches of their respective systems.

Although the two equations are derived from the same methods, they mean different things — and their superficial resemblance, differing only by a factor of two, can cause conceptual problems.

In Equation (1) the relationship between \sigma and \vec{E} is causal. That is, the electric field is due directly to the source charge density in question. It does not represent the field due to all sources in the problem, only the lone contribution from that local \sigma.

In Equation (2) the relationship between \sigma and \vec{E} is not a simple causal one; rather, it expresses self-consistency, discussed more below. Here the electric field represents the net field outside of the conductor near the charge density in question. In other words, it automatically includes both the contribution from the local patch itself and the contributions from all other sources. It has already added up all the contributions from all other sources in the space around it (this could, in some cases, include sources you weren’t aware of!).

How did this happen? First, in contrast to the sheet of charge where the charges are fixed in space, the charges in a conductor are mobile. They aren’t allowed to move while doing the “statics” part of electrostatics, but they are allowed to move in some transient sense to quickly facilitate a steady state. In steady state, the charges have all moved to the surfaces and we can speak of an electrostatic surface distribution on the conductor. This charge mobility always arranges the surface distributions to ensure \vec{E}=0 inside the conductor in electrostatic equilibrium. This is easy enough to implement mathematically, but gives rise to the subtle state of affairs encountered above. The \sigma on the conductor is responding to the electric fields generated by the presence of other charges in the system, but those other charges in the system are, in turn, responding to the local \sigma in question. Equation (2) then represents a statement of self-consistency, and it breaks the cycle using the power of Gauss’s law. As a side note, the electric displacement vector, \vec{D}, plays a similar role of breaking the endless self-consistency cycle of polarization and electric fields in symmetric dielectric systems.

Let’s look at some examples.

Example 1:
Consider a large conducting plate versus a large non-conducting sheet of charge. Each surface is of area A. The conductor has total charge Q, as does the non-conducting sheet. Find the electric field of each system. The result will be that the fields are the same for the conductor and non-conductor, but how can this be reconciled with Equations (1) and (2), which, at a glance, seem to give very different answers? See the figure below:

Conductor_1

For the non-conducting sheet, as shown in Figure (B) above, the electric field due to the source charge is given by Equation (1)

\vec{E}_{nc}=\frac{\sigma_{nc}}{2\epsilon_0}\hat{n}

where

\sigma_{nc}\equiv\sigma=Q/A

(“nc” for non-conducting) and \hat{n}=+\hat{z} above the positive surface and \hat{n}=-\hat{z} below it.

Now, in the case of the conductor, shown in Figure (A), Equation (2) tells us the net value of the field outside the conductor. This net value is expressed, remarkably, only in terms of the local charge density; but remember, for a conductor, the local charge density contains information about the entire set of sources in the space. At a glance, it seems the electric field might be twice the value of the non-conducting sheet. But no! This is because the charge density will be different than in the non-conducting case. For the conductor, the charge responds to the presence of the other charges and spreads out uniformly over both the top and bottom surfaces; this ensures \vec{E}=0 inside the conductor. In this context, it is worth pointing out that there are no infinitely thin conductors. Infinitely thin sheets of charge are fine, but not conductors. There are always two faces to a thin conducting surface and the surface charge density must be (at least tacitly) specified on each. Even if a problem uses language that implies the conducting surface is infinitely thin, it can’t be.

For example, the following figure, which posits an “infinitely thin conducting surface with charge density \sigma” and then applies Equation (2) to determine the field, makes no sense:

nonsenseconductor copy

This application of Equation (2) cannot be reconciled with Equation (1). We can’t have it both ways. An “infinitely thin conductor” isn’t a conductor at all and should reduce to Equation (1). To be a conductor, even a thin one, there needs to be (at least implicitly) two surfaces and a material medium we call “the conductor” that is independent of the charge.

Back to the example.

Conductor_1

If the charge Q is spread out uniformly over both sides of the conductor in Figure (A), the charge density for the conductor is then

\sigma_c=\frac{Q}{2A}=\frac{\sigma_{nc}}{2}=\frac{\sigma}{2}

(“c” for conducting). The factor of 2 comes in because each face has area A and the charge spreads evenly across both. Equation (2) now tells us what the field outside the conductor is. This isn’t just for the one face, but includes the net contributions from all sources

\vec{E}_{c}=\frac{\sigma_c}{\epsilon_0}\hat{n}=\frac{\sigma_{nc}}{2\epsilon_0}\hat{n}=\vec{E}_{nc}.

That is, the net field is the same for each case,

\vec{E}_{c}=\vec{E}_{nc}.

Even though Equations (1) and (2) might seem superficially inconsistent with each other for this situation, they give the same answer, although for different reasons. Equation (1) gives the electric field that results directly from \sigma alone. Equation (2) gives a self-consistent net field outside the conductor, which uses information contained in the local charge density. The key is understanding that the surface charge densities used for the sheet of charge and the conductor are different in each case. In the case of a charged sheet, we have the freedom to declare a surface with a fixed, unchanging charge density. With a conductor, we have less, if any, control over what the charges do once we place them on the surfaces.
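A short numerical sketch of Example 1 (the charge Q and area A below are arbitrary illustrative values) shows the two routes agreeing: superposing the two faces of the conductor via Equation (1) gives the same field above the plate as applying Equation (2) to the local density, and both match the non-conducting sheet.

```python
# Example 1 check: conductor (charge on two faces) vs non-conducting sheet.
EPS0 = 8.854e-12      # permittivity of free space, F/m
Q, A = 1.0e-6, 1.0    # illustrative total charge (C) and face area (m^2)

def sheet_field_z(sigma, point_is_above):
    """z-component of an infinite sheet's field, Eq. (1)."""
    return (sigma / (2.0 * EPS0)) * (1.0 if point_is_above else -1.0)

# Non-conducting sheet: all of Q on a single surface.
sigma_nc = Q / A
E_nc = sheet_field_z(sigma_nc, point_is_above=True)

# Conductor: Q spreads over both faces, sigma_c = Q/(2A) on each.
sigma_c = Q / (2.0 * A)
# Superpose the two faces (Eq. (1) applied twice), for a point above the plate:
E_c_superposed = sheet_field_z(sigma_c, True) + sheet_field_z(sigma_c, True)
# Eq. (2) applied to the local conductor density gives the net field directly:
E_c_eq2 = sigma_c / EPS0

print(E_nc, E_c_superposed, E_c_eq2)   # all three agree
```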

It is worth noting that each individual surface of charge on the conductor has a causal contribution to the field still given by Equation (1), but only once the surface densities have been determined — with one important footnote. The net field in each region can be determined by adding up all the (shown) individual contributions in superposition only if the charges shown are the only charges in the problem and were allowed to relax into this equilibrium state due to the charges explicitly shown. This last point will be illustrated in an example at the end of this post. It turns out that you can’t just declare arbitrary charge distributions on conductors and expect those same charges you placed to be solely responsible for it. There may be “hidden sources” if you insist on keeping your favorite arbitrary distribution on a conductor. If you do, you must also account for those contributions if you want to determine the net field by superposition. However, all is not lost: amazingly, Equation (2) still accounts for those hidden sources for the net field! With Equation (2) you don’t need to know the individual fields from all sources in order to determine the net field. The local charge density on the conductor already includes this information!

Example 2:
Compare the field between a parallel plate capacitor with thin conducting sheets, each having charge \pm Q and area A, with the field between two non-conducting sheets of charge with charge \pm Q and area A. This situation is a standard textbook problem and forms the template for virtually all introductory capacitor systems. The result is that the field between the conducting plates is the same as the field between the non-conducting charge sheets, as shown in the figure below. But how can this be reconciled with Equations (1) and (2)? We use a treatment similar to that in Example 1.

Plates

Between the two non-conducting sheets, as shown in Figure (D), the top positive sheet has a field given by Equation (1), pointing down (call this the -\hat{z} direction). The bottom negative sheet also has a field given by Equation (1) and it also points down. The charge density on the positive surface is given by \sigma=Q/A. We superimpose the two fields to get the net result

\vec{E}=\vec{E}_{1}+\vec{E}_2=\frac{+\sigma}{2\epsilon_0}(-\hat{z})+\frac{-\sigma}{2\epsilon_0}(+\hat{z})=-\frac{\sigma}{\epsilon_0}\hat{z}.

Above the top positive non-conducting sheet, the field points up due to the top non-conducting sheet and down due to the negative non-conducting sheet. Using Equation (1), they have equal magnitude, thus the fields cancel in this region after superposition. The fields cancel in a similar fashion below the bottom non-conducting sheet.

Unfortunately, the setup for the conductor, shown in Figure (C), is framed in an apparently ambiguous way. However, this kind of language is typical in textbooks. Where is this charge residing exactly? If this is not interpreted carefully, it can lead to inconsistencies like those of the “infinitely thin conductor” above. The first thing to appreciate is that, unlike the nailed-down charge on the non-conducting sheets, the charge densities on the parallel conducting plates are necessarily the result of responding to each other. The term “capacitor” also implies that we start with neutral conductors and do work bringing charge from one plate to the other, leaving the first with an equal but opposite charge deficit. Next, we recognize that even thin conducting sheets have two sides. That is, the top sheet has a top and bottom, and the bottom conducting sheet also has a top and bottom. Since the conducting plates have equal and opposite charges, and those charges are responding to each other, they will be attracted to each other and thus reside on the faces that point at each other. The outer faces will contain no charge at all. That is, the \sigma=Q/A from the top plate is on that plate’s bottom surface with none on the top surface. Notice, unlike Example 1, the conductor has the same charge density as its non-conducting counterpart. Similar for the bottom plate but with the signs reversed. A quick application of Gauss’s law can also demonstrate the same conclusion.

With this in mind, we are left with a little puzzle. Since we know the charge densities, do we jump right to the answer using Equation (2)? Or do we now worry about the individual contributions of each plate using Equation (1) and superimpose them to get the net field? The choice is yours. The easiest path is to just use Equation (2) and write down the results in each region. Above and below all the plates, \sigma=0 so \vec{E}=0; again, Equation (2) has already done the superposition of the individual plates for us. In the middle, we can use either plate (but not both added…remember, this isn’t superposition!). If we used the top plate, we would get

\vec{E}=\frac{\sigma}{\epsilon_0}(-\hat{z})=-\frac{\sigma}{\epsilon_0}\hat{z}

and if we used the bottom plate alone, we would get

\vec{E}=\frac{-\sigma}{\epsilon_0}\hat{z}=-\frac{\sigma}{\epsilon_0}\hat{z}.

They both give the same individual result, which is the same result as the non-conducting sheet case above where we added individual contributions.
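A similar sketch for Example 2 (again with an arbitrary illustrative value of \sigma) confirms that superposing the two sheets via Equation (1) and reading Equation (2) off the conductor's local densities give the same fields in every region:

```python
# Example 2 check: parallel plate capacitor vs two non-conducting sheets.
EPS0 = 8.854e-12      # permittivity of free space, F/m
sigma = 1.0e-6        # illustrative surface charge density magnitude, C/m^2

def sheet_field_z(sigma_sheet, point_is_above):
    """z-component of an infinite sheet's field, Eq. (1)."""
    return (sigma_sheet / (2.0 * EPS0)) * (1.0 if point_is_above else -1.0)

# Superposition of the +sigma (top) and -sigma (bottom) sheets:
E_between = sheet_field_z(+sigma, point_is_above=False) \
          + sheet_field_z(-sigma, point_is_above=True)
E_outside = sheet_field_z(+sigma, True) + sheet_field_z(-sigma, True)

# Eq. (2) for the conductor: charge sits on the inner faces only, so the local
# density is sigma between the plates and zero on the outer faces.
E_between_eq2 = -sigma / EPS0    # points from the + plate toward the - plate (-z)
E_outside_eq2 = 0.0

print(E_between, E_between_eq2)   # both equal -sigma/epsilon_0
print(E_outside, E_outside_eq2)   # both zero
```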

If we were asked “what is the force of the top plate on the bottom plate?” we actually do need to know the field due to the charge on the single top plate alone and apply it to the charge on the second plate. In this case, we are not just interested in the total field due to all charges in the space as given by Equation (2). Here, the field due to the single top plate would indeed be given by Equation (1), as would the field due to the single bottom plate. We could then go on to superimpose those fields in each region to obtain the same result. That is, once the charge distributions are established, we can substitute sheets of non-conducting charge in place of the conducting plates and use those field configurations in future calculations of energy, force, etc.

However, not all charge distributions for the conductor are the same. A strange consequence of all this is that, despite the fact that Example 1 gave us one kind of conductor configuration that was equivalent to single non-conducting sheet, this same conductor can’t be just transported in and made into a capacitor as shown in the next figure:

Conductor_3

On a conductor, we simply don’t have the freedom to invent a charge distribution, declare “this is a parallel plate capacitor,” and then assume the charges are consistent with that assertion. A charge configuration like Figure (E) isn’t a parallel plate capacitor in the usual parlance, although the capacitance of such a system could certainly be calculated. If we were to apply Equation (1) to each surface and superimpose them in each region, we might come to the conclusion that it had the same field as a parallel plate capacitor and conclude that Figure (E) was incorrect, particularly in the regions above and below the plates. However, Equation (2) tells us that the field in the regions above and below the plates cannot be zero despite what a quick application of Equation (1) might make us believe. What this tells us is that there must be unseen sources in the space, off stage, that are facilitating the ongoing maintenance of this configuration. In other words, charges on conducting plates would not configure themselves in such a way unless there were influences other than the charges shown. If we just invent a charge distribution and impose it onto a conductor, we must be prepared to justify it via other sources, applied potentials, external fields, and so on.

So, even though plate (5) in Figure (E) was shown to be the same as a single non-conducting plate, we can’t just make substitutions like those in this figure. We can do this with sheets of charge, but not with other conductors. Yes, the configuration in Figure (E) is physically possible, it just isn’t the same as a parallel plate capacitor, even though each element analyzed in isolation makes it seem like it would be the same.

In short, Equations (1) and (2) are very different kinds of expressions. Equation (1) is a causal one that can be used in conjunction with the superposition principle: one is calculating a single electric field due to some source charge density. Equation (2) is more subtle and is a statement of self-consistency with the assumptions of a conductor in equilibrium. An application of Equation (2) for a conductor gives the net field due to all sources, not just the field due to the conducting patch with charge density \sigma: it comes “pre-superimposed” for you.

Newton’s First Law is not a special case of his Second Law

When teaching introductory mechanics in physics, it is common to teach Newton’s first law of motion (N1) as a special case of the second (N2). In casual classroom lore, N1 addresses the branch of mechanics known as statics (zero acceleration) while N2 addresses dynamics (nonzero acceleration). However, without getting deep into concepts associated with Special and General Relativity, I claim this is not the most natural or effective interpretation of Newton’s first two laws.

N1 is the law of inertia. Historically, it was asserted as a formal launching point for Newton’s other arguments, clarifying misconceptions left over from the time of Aristotle. N1 is a pithy restatement of the principles established by Galileo, principles Newton was keenly aware of. Newton’s original language from the Latin can be translated roughly as “Law I: Every body persists in its state of being at rest or of moving uniformly straight forward, except insofar as it is compelled to change its state by force impressed.” This is attempting to address the question of what “a natural state” of motion is. According to N1, a natural state for an object is not merely being at rest, as Aristotle would have us believe, but rather uniform motion (of which “at rest” is a special case). N1 claims that an object changes its natural state when acted upon by external forces.

N2 then goes on to clarify this point. N2, in Newton’s language as translated from the Latin, was stated as “Law II: The alteration of motion is ever proportional to the motive force impress’d; and is made in the direction of the right line in which that force is impress’d.” In modern language, we would say that the net force acting on an object is equal to its mass times its acceleration, or

\vec{F}_{\rm net}=m\vec{a}

In the typical introductory classroom, problems involving N2 would be considered dynamics problems (forgetting about torques for a moment). A net force generates accelerations.

To recover statics, where systems are in equilibrium (again, modulo torques), students and professors of physics frequently then back-substitute from here and say something like: in the case where \vec{a}=0, clearly we recover N1, which can now be stated something like:

\vec{a}=0
if and only if
\vec{F}_{\rm net}=0

This latter assertion certainly looks like the mathematical formulation of Newton’s phrasing of N1. Moreover, it seemed to follow from the logic of N2 so, ergo, “N1 is a special case of N2.”

But this is all a bit too ham-fisted for my tastes. Never mind the nonsensical logic of why someone as brilliant as Newton would start his three laws of motion with a special case of the second. That alone should give one pause. Moreover, Newton’s original language of the laws of motion is antiquated and doesn’t illuminate the important modern understanding very well. Although he was brilliant, we definitely know more physics now than Newton did and understand his own laws at a deeper level than he did. For example, we have an appreciation for Electricity and Magnetism, Special Relativity, and General Relativity, all of which force one to clearly articulate Newton’s Laws at every turn, sometimes overthrowing them outright. This has forced physicists over the past 150 years to be very careful about how the laws are framed and interpreted in modern terms.

So why isn’t N1 really a special case of N2?

I first gained an appreciation for why N1 is not best thought of as a special case of N2 when viewing the famous educational film Frames of Reference by Patterson Hume and Donald Ivey (below), which I use in my Modern Physics classes when setting up relative motion and frames of reference. Then it really hit home later while teaching a course specifically about Special Relativity from the book of the same name by T. M. Helliwell.

A key modern function of N1 is that it defines inertial frames. Although Newton himself never really addresses inertial frames in his work, this modern interpretation is of central importance in modern physics. Without this way of interpreting it, N1 does functionally become a special case of N2 if you treat pseudoforces as actual forces. That is, if “ma” and the frame kinematics are considered forces. In such a world, N1 is redundant and there really are only two laws of motion (N2 and the third law, N3, which we aren’t discussing here). So why don’t we frame Newton’s laws this way? Why have N1 at all? One might be able to get away with this kind of thinking in a civil engineering class, but forces are very specific things in physics and “ma” is not amongst them.

So why is “ma” not a force and why do we care about defining inertial frames?

Basically an inertial frame is any frame where the first law is obeyed. This might sound circular, but it isn’t. I’ve heard people use the word “just” in that first point: “an inertial frame is just any frame where the first law is obeyed.” What’s the big deal? To appreciate the nuance a bit, the modern logic of N1 goes something like this:

if
\vec{F}_{\rm net}=0
and
\vec{a}=0
then you are in an inertial frame.

Note, this is NOT the same as a special case of N2 as stated above in the “if and only if” phrasing

\vec{a}=0
if and only if
\vec{F}_{\rm net}=0

That is, N1 is a one-way if-statement that provides a clear test for determining if your frame is inertial. The way you do this is you systematically control all the forces acting on an object and balance them, ensuring that the net force is zero. A very important aspect of this is that the catalog of what constitutes a force must be well defined. Anything called a “force” must be linked back to the four fundamental forces of nature and constitute a direct push or a pull by one of those forces. Once you have actively balanced all the forces, getting a net force of zero, you then experimentally determine if the acceleration is zero. If so, you are in an inertial frame. Note, as I’ve stated before, this does not include any fancier extensions of inertial frames having to do with the Principle of Equivalence. For now, just consider the simpler version of N1.

With this modern logic, you can also take the contrapositive and assert that if your frame is non-inertial, then you can have either i) accelerations in the presence of apparently balanced forces or ii) apparently uniform motion in the presence of unbalanced forces.

The reason for determining if your frame is inertial or not is that N2, the law that determines the dynamics and statics for new systems you care about, is only valid in inertial frames. The catch is that one must use the same criteria for what constitutes a “force” that was used to test N1. That is, all forces must be linked back to the four fundamental forces of nature and constitute a direct push or a pull by one of those forces.

Let’s say you have determined you are in an inertial frame within the tolerances of your experiments. You can then go on to apply N2 to a variety of problems and assert the full powerful “if and only if” logic between forces and accelerations in the presence of any new forces and accelerations. This now allows you to solve both statics (no acceleration) and dynamics (acceleration not equal to zero) problems in a responsible and systematic way. I assert both statics and dynamics are special cases of N2. If you give up on N1 and treat it merely as a special case of N2 and further insist that statics is all N1, this worldview can be accommodated at a price. In this case, statics and dynamics cannot be clearly distinguished. You haven’t used any metric to determine if your frame is inertial. If you are in a non-inertial frame but insist on using N2, you will be forced to introduce pseudoforces. These are “forces” that cannot be linked back to pushes and pulls associated with the four fundamental forces of nature. Although it can be occasionally useful to use pseudoforces as if they were real forces, they are physically pathological. For example, every inertial frame will agree on all the forces acting on an object, able to link them back to the same fundamental forces, and thus agree on its state of motion. In contrast, every non-inertial frame will generally require a new set of mysterious and often arbitrary pseudoforces to rationalize the motion. Different non-inertial frames won’t agree on the state of motion and won’t generally agree on whether one is doing statics or dynamics! As mentioned, pseudoforces can be used in calculation, but it is most useful to do so when you actually know a priori that you are in a known non-inertial frame but wish to pretend it is inertial for practical reasons (for example, the rotating earth creates small pseudoforces such as the Coriolis force, the centrifugal force, and the transverse force, all byproducts of pretending the rotating earth is inertial when it really isn’t).

Here’s a simple example that illustrates why it is important not to treat N1 as a special case of N2. Say Alice places a box on the ground and it doesn’t accelerate; she analyzes the forces in the frame of the box. The long range gravitational force of the earth on the box pulls down and the normal (contact) force of the surface of the ground on the box pushes up. The normal force and the gravitational force must balance since the box is sitting on the ground not accelerating. OR SO SHE THINKS. The setup said “the ground” not “the earth.” “The ground” is a locally flat surface upon which Alice stands and places objects like boxes. “The earth” is a planet and is a source of the long range gravitational field. You cannot be sure whether the force you are attributing to gravity really comes from a planet pulling you down (indeed, the Principle of Equivalence asserts that one cannot tell, but this is not the key to this puzzle).

Alice has not established that N1 is true in her frame and that she is in an inertial frame. This could cause headaches for her later when she tries to launch spacecraft into orbit. Yes, she thinks she knows all the forces at work on the box, but she hasn’t tested her frame. She really just applied the logic backwards, treating N1 as a special case of N2, and assumed she was in an inertial frame because she observed the acceleration to be zero. This may seem like a “difference without a distinction,” as one of my colleagues put it. Yes, Alice can still do calculations as if the box were in static equilibrium and the acceleration was zero, at least in this particular instance at this moment. However, there is a difference that can indeed come back and bite her if she isn’t more careful.

How? Imagine that Alice was, unbeknownst to her or her ilk, on a large rotating (very light) ringworld (assuming ringworlds were stable and have very little gravity of their own). The inhabitants of the ringworld are unaware they are rotating and believe the rest of the universe is rotating around them (for some reason, they can’t see the other side of the ring). This ringworld frame is non-inertial but, as long as Alice sticks to the surface, it feels just like walking around on a planet. For Bob, an inertial observer outside the ringworld (who has tested N1 directly first), there is only one force on the box: the normal force of the ground, which pushes the box towards the center of rotation and keeps the box in circular motion. All other inertial observers will agree with this analysis. This is very clearly a case of applying N2 with accelerations for the inertial observer. The box on the ground is a dynamics problem, not a statics problem. Alice, who believes she is in an inertial frame by taking N1 to be a special case of N2 (having not tested N1!), assumes there are two forces keeping the box in static equilibrium — it appears to be a statics problem. Is this just a harmless attribution error? If it gives the same essential results, what is the harm? Again, in an engineering class for this one particular box under these conditions, perhaps this is good enough to move on. However, from a physics point of view, it introduces potentially very large problems down the road, both practical and philosophical. The philosophical problem is that Alice has attributed a long range force where none existed, turning “ma” into a force of nature, which it isn’t. That is, the gravity experienced by the ringworld observer is “artificial”: no physical long range force is pulling the box “down.” Indeed “down,” as observed by all inertial observers, is actually “out,” away from the ring. Gravity is a pseudoforce in this context. There has been a violation of what constitutes a “force” for physical systems, and an unphysical, ad hoc “force” had to be introduced to rationalize the observation of what appears to be zero local acceleration. Again, let us forgo any discussions of the Equivalence Principle here, where gravity and accelerations can be entwined in funny ways.
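
For a feel for the numbers in Bob’s analysis, here is a small sketch, assuming a purely illustrative ring radius of 100 km (my own choice, not implied by the story), of the spin rate at which the normal force alone, acting as the centripetal force, would mimic one earth gravity at the rim.

```python
import numpy as np

# Bob's analysis in numbers: the only force on the box is the normal force,
# which supplies the centripetal acceleration a = omega**2 * R toward the
# ring's axis. That acceleration is the "gravity" Alice feels.
g = 9.81           # desired apparent surface gravity, m/s^2
R = 1.0e5          # assumed ring radius (100 km), purely illustrative
omega = np.sqrt(g / R)           # spin rate needed so omega**2 * R = g
print(f"spin rate {omega:.4f} rad/s, rim speed {omega*R:.0f} m/s, "
      f"rotation period {2*np.pi/omega/60:.1f} minutes")
```

At that spin rate the box’s “weight” is nothing but the normal force bending its path into a circle.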

This still might seem harmless at first. But imagine that Alice and her team on the ring fire a rocket upwards, normal to the ground, trying to exit or orbit their “planet” under the assumption that it is a gravitational body that pulls things down. They would find a curious thing. Rockets cannot leave their “planet” by being fired straight up, no matter how fast. The rockets always fall back and hit the ground: despite being launched straight up with what seems to be only “gravity” acting on them, the rocket trajectories always bend systematically in one direction and strike the “planet” again. Insisting the box test was a statics problem with N1 as a special case of N2, they have no explanation for the rocket’s behavior except to invent a strange new horizontal force that only acts on the rocket once launched and depends in odd ways on the rocket’s velocity. There does not seem to be any physical agent for this force, and it cannot be attributed to the previously known fundamental forces of nature. There are no obvious sources of this force; it is simply present on an empirical level. In this case, it happens to be a Coriolis force. This, again, might seem an innocent attribution error. Who’s to say their mysterious horizontal forces aren’t “fundamental” for them? But it also implies that in every non-inertial frame, every type of ringworld or other non-inertial system, one would have a different set of “fundamental forces,” all valid in their own way. This concept is anathema to what physics is about: trying to unify forces rather than catalog many special cases.

In contrast, you and all other inertial observers recognize the situation instantly: once the rocket leaves the surface and loses contact with the ringworld floor, no forces act on it anymore, so it moves in a straight line, hitting the far side of the ring. The ring has rotated some amount in the meantime. The “dynamics” the ring observers see during the rocket launch is actually a statics (acceleration equals zero) problem! So Alice and her crew have it all backwards. Their statics problem of the box on the ground is really a dynamics problem, and their dynamics problem of launching a rocket off their world is really a statics problem! Since they didn’t bother to systematically test N1 and determine if they were in an inertial frame, the very notions of “statics” and “dynamics” are all turned around.
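
To see the rocket’s “mysterious” deflection quantitatively, here is a short sketch that takes the force-free, straight-line flight Bob sees in the inertial frame and simply re-expresses it in Alice’s rotating ring coordinates (same illustrative ring as in the sketch above; the launch speed is also just an assumption).

```python
import numpy as np

# Bob's view: after launch the rocket feels no forces, so it coasts in a
# straight line across the ring's interior. Rewriting that same straight line
# in Alice's rotating coordinates shows the curved, "deflected" path she sees.
# All numbers are illustrative assumptions.
R = 1.0e5                        # assumed ring radius, m (as in the sketch above)
omega = np.sqrt(9.81 / R)        # spin rate that mimics 1g at the rim
v_up = 5000.0                    # assumed launch speed "straight up" (toward the axis), m/s

# Straight-line flight in the inertial frame: x(t) = R - v_up*t, y(t) = omega*R*t.
# It reaches the rim again (x**2 + y**2 = R**2) at:
t_land = 2 * R * v_up / (v_up**2 + (omega * R)**2)

t = np.linspace(0.0, t_land, 500)
x, y = R - v_up * t, omega * R * t                      # inertial-frame straight line
xr = np.cos(omega * t) * x + np.sin(omega * t) * y      # same path in ring coordinates
yr = -np.sin(omega * t) * x + np.cos(omega * t) * y

land_angle = np.degrees(np.arctan2(yr[-1], xr[-1]))
print(f"flight time {t_land:.0f} s; lands about {land_angle:.0f} degrees around the ring from the pad")
```

No horizontal force is ever applied; the sideways drift Alice records is entirely an artifact of her rotating coordinates.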

So, in short, a modern interpretation of Newton’s Laws of Motion asserts that N1 is not a special case of N2. First establishing that N1 holds and that your frame is inertial is critical to how one interprets the physics of a problem.

Coldest cubic meter in the universe

My collaborators in CUORE, the Cryogenic Underground Observatory for Rare Events, at the underground Gran Sasso National Laboratory in Assergi, Italy, have recently created (literally) the coldest cubic meter in the universe. For 15 days in September 2014, cryogenic experts in the collaboration were able to hold roughly one contiguous cubic meter of material at about 6 mK (that is, 0.006 degrees above absolute zero, the coldest possible temperature).

At first, a claim like “this is the coldest cubic meter in the [insert spatial scale like city/state/country/world/universe]” may sound like an exaggeration or a headline-grabbing ruse. What about deep space? What about ice planets? What about nebulae? What about superconductors? Or cold atom traps? However, the claim is absolutely true in the sense that there are no known natural processes that can reliably create temperatures anywhere near 6 mK over a contiguous cubic meter anywhere in the known universe. Cold atom traps, laser cooling, and other remarkable ultracold technologies are able to get systems of atoms down to the bitter nK and even pK scale (a billionth of a degree above absolute zero and below). However, the key term here is “systems of atoms.” These supercooled systems are indeed tiny collections of atoms in very small spaces, nowhere near a cubic meter. Large, macroscopic superconductors can operate at liquid nitrogen or liquid helium temperatures, but those are very warm compared to what we are talking about here. Even deep space is sitting at a balmy 2.7 K thanks to the cosmic microwave background radiation (CMBR). Some specialized thermodynamic conditions, such as those found in the Boomerang Nebula, may bring things down to a chilly 300-1000 mK because of the extended expansion of gases in a cloud over long times. The CMB cold spot is only about 70 microkelvin below the average CMBR temperature.

However, the only process capable of reliably bringing a cubic-meter vessel down to 6 mK is sentient creatures actively trying to do so. While nature could do it on its own in principle, via some exotic process or ultra-rare thermal fluctuation, the easiest natural path to such cold swaths of space, statistically sampled over a short 13.8 billion years, is to first evolve life, then evolve sentient creatures who actively perform the project. So the only other likely way for a competing cubic meter to be sitting at this temperature somewhere in the universe is for sentient aliens to have also made it happen. The idea behind the news angle “the coldest cubic meter” was the brainchild of my collaborator Jon Ouellet, a graduate student in physics at UC Berkeley and a member of the CUORE working group responsible for achieving the cooldown. His take on this is written up nicely in his piece on the arXiv entitled The Coldest Cubic Meter in the Known Universe.

I’ve been a member of the CUORE and Cuoricino collaborations since 2004, when I was a postdoc at Lawrence Berkeley Laboratory. I’m now a physics professor at California Polytechnic State University in San Luis Obispo and, through NSF support, send undergraduate students to Gran Sasso to help with shifts and other R&D activities during the summers. Indeed, my students were at Gran Sasso when the cooldown occurred in September, but they were working on another part of the project, doing experimental shifts for CUORE-0. CUORE-0 is a precursor to CUORE and is currently running at Gran Sasso. It is cooled down to about 10 mK and is perhaps a top-10 contender for the coldest contiguous 1/20th of a cubic meter in the known universe.

I will write more about CUORE and its true purpose in coming posts.

On a speculative note, one must naturally wonder if this kind of technology could be utilized in large-scale quantum computing or other tests of macroscopic quantum phenomena. While there are many phonon quanta associated with so many crystals at these temperatures (and so the system is pretty far from the quantum ground state, and has likely decohered on any time scale we could measure), it is still intriguing to ask if some carefully selected macroscopic quantum states of such a large system could be manipulated systematically. Large-mass gravitational wave antennae, or Weber bars, have been cooled to a point where the system can be considered to be in a coherent quantum state from the right point of view. Such measurements usually take place with sensitive SQUID detectors looking for ultra-small strains in the material. Perhaps this new CUORE technology, involving macroscopic mK-cooled crystal arrays, could be utilized in a similar fashion for a variety of purposes.

RHIC/AGS User’s Meeting Talk and Yeti Pancakes

I was recently invited to give a talk on neutrinoless double beta decay at the RHIC/AGS User’s Meeting at Brookhaven National Laboratory. The talk was entitled “Neutrinoless Double Beta Decay: Tales from the Underground” and was a basic overview (for other physicists, targeted primarily at graduate students) of neutrino physics and the state of neutrinoless double beta decay. The talk was only 20+5 (twenty minutes plus five for questions), so there wasn’t time to get into a lot of detail.

It was great to be back at BNL and see some of my old friends and colleagues. It was particularly nice to see my mentor and friend Professor Dan Cebra again and meet his recent crew of graduate students.

Being asked to give a neutrinoless double beta decay talk at a meeting entirely focused on the details of heavy ion physics is a little like a yeti and a pancake: they are terms not usually used in the same sentence, but somehow it works. The organizers’ motivation was noble. At these meetings, they typically pick a couple of topics in nuclear physics outside their usual routine and have someone give them a briefing. This was exactly in that spirit.

To download the Standard Model Lagrangian I used in the talk, visit my old UC Davis site, where you can find pdf and tex versions of it for your own use. If you are interested in investigating the hadron spectra I show in the talk, you can download my demonstration, available in CDF format, from the Wolfram Demonstrations Project. The Feynman diagram for neutrinoless double beta decay was taken from Wikipedia. Most of the other figures are standard figures used in neutrinoless double beta decay talks. As a member of the CUORE collaboration, I used vetted information regarding our data and experiment.

Enjoy

[Talk figures: HadronSpectra_bold_trim, Particles_11]

Farewell Stuart


I am very saddened today to hear of the sudden passing of my colleague Stuart Freedman. He was a great scientist and a great mentor. I will miss his dry wit and his gift for seeing right to the heart of an issue. As part of his Ph.D. work circa 1972 with Clauser, he obtained the first experimental result showing a violation of Bell’s inequality, demonstrating that quantum mechanics was not only complete but non-local in character. This was during a time when “dabbling” in the foundations of quantum mechanics was not particularly fashionable. However, his ambitious result paved the way for the later celebrated work of Aspect et al. and is sadly often forgotten in such discussions. The breadth of his contributions to science is uncanny, spanning many fields and specialties as he moved from Berkeley to Princeton, Stanford, the University of Chicago, and back to Berkeley. He was a fellow of the American Physical Society and a member of the National Academy of Sciences. At Berkeley, he held the prestigious Luis W. Alvarez Chair in Experimental Physics. I was most familiar with him in his recent role as the US spokesman for the CUORE collaboration, having met him in 2005 while I was still a postdoc at Berkeley Lab. His voice of scientific leadership in our work will be greatly missed. It was a privilege to have worked and collaborated with him, and to name him amongst my mentors. Farewell, Stuart. You will be missed.