Thank you for reading and for your thoughtful questions!
A lot of the building that goes into making a scientific machine like mine isn’t too different from engineering. I worked as a mechanical/systems engineer in industry before starting graduate school, so I can confirm that the biggest difference is that I no longer need to be able to sell what I build. This means our tools are not usually very user-friendly — but they definitely get the job done. This is also true for the code we write to control our machine and acquire our data.
While building our machine, we have many tests and diagnostics we can run that aren’t too different from what an electrical engineer might do. For example, if we need a PID (proportional, integral, derivative) lockbox to stabilize the frequency of a laser, we design it to some specification of bandwidth. This tells us how quickly the stabilization electronics can react. To test the bandwidth after we build the lockbox, we use what’s called a network analyzer. A network analyzer sends electrical signals that vary in frequency over time (it “sweeps” the frequency) to our lockbox and then records and plots the response of the box. With this tool, we can see the frequency range over which our new lockbox can stabilize our laser signal. After we successfully stabilize the frequency of our laser with this lockbox, we directly measure the frequency stability of the laser by looking at something called the laser linewidth. To do this, we use another common electrical engineering tool called a spectrum analyzer.
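If you’re curious what the PID idea actually looks like, here’s a minimal sketch in Python. To be clear, the gains and the toy “drifting laser” model below are invented for illustration — a real lockbox is analog electronics running far faster than any software loop — but the three terms are exactly the proportional, integral, and derivative pieces the name refers to:

```python
# A minimal discrete-time PID loop (illustrative sketch only; the gains
# and the toy "laser offset" model are invented, not real lockbox values).
kp, ki, kd = 0.5, 0.1, 0.1   # proportional, integral, derivative gains

setpoint = 0.0               # where we want the laser's frequency offset to sit
offset = 5.0                 # start 5 (arbitrary units) away from the lock point
integral = 0.0
prev_error = setpoint - offset

for _ in range(200):
    error = setpoint - offset
    integral += error                # I: remembers accumulated error over time
    derivative = error - prev_error  # D: reacts to how quickly the error is changing
    prev_error = error
    correction = kp * error + ki * integral + kd * derivative
    offset += correction             # toy actuator: apply the correction directly

print(f"offset after locking: {offset:.6f}")  # settles very close to zero
```

The P term alone would leave a residual offset; the I term slowly removes it; the D term damps overshoot. The “bandwidth” we test with the network analyzer is, loosely, how fast a disturbance can be before a loop like this can no longer keep up.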
We do many, many small tests like these on every component of our machine. We iterate through these steps of designing, building, and testing small electrical and optical components until everything is working as it should. These sorts of electronic and control system applications are well understood in theory, so we usually know what to aim for.
What I’ve described is slightly different from your question about how we know whether new experimental results are valid. Once we confirm that the engineering aspects are working correctly, it is still very possible that we will see a signal from our atoms that no one has seen before. As a matter of fact, that’s what we hope for.
Luckily, there are many established techniques and observations in my field that have been well characterized both experimentally and theoretically. We can recreate some of these simpler, well-understood techniques to confirm our machine is working. Once we do so, we can use the solid foundation of knowledge in atomic physics to think about which procedures people have not yet tried and in which parameter spaces we might expect to see something new.
When we’re designing the procedure for a new experiment, we purposefully aim for these untested parameter spaces. Before we physically do anything, however, we spend a lot of time thinking about what we should expect to see based on the current, accepted models of physics. In my research, the most relevant fields are quantum electrodynamics (QED) and atomic physics more broadly. Diving into these theories involves a lot of math, but our predictions also rely heavily on what physicists call our “physical intuition.” I am early in my academic career, but I have already spent about eight years in concentrated study of advanced physics, both in the classroom and in research. My PhD advisor has spent roughly thirty-five years doing the same. Over that time, a person builds up an intuitive framework for how physical systems behave under given conditions. It’s the combination of the math and our physical intuition that guides us in deciding whether the signal we’re seeing is a real physical effect or the result of the lab upstairs turning on a really big magnet without telling us (this happens way more often than you might think).
Of course, we also rely heavily on statistics to tell us whether a signal is real or not. If we see a huge signal once and can never reproduce it, we conclude it was a systematic error in our experiment. Similarly, we would not feel confident about a reproducible signal if the signal-to-noise ratio (SNR) was too low. In that case, we would try to increase the SNR or search a different parameter space in which a signal is more attainable.
As for your last question, I would say that most academic papers in my field give near-sufficient information to reproduce a result. Reaching out to authors of papers to ask clarifying questions on specific parameters (frequencies, scattering lengths, wavelengths, optical powers, etc.) is common. Worldwide, atomic physics is a small community, and there is constant collaboration and discussion. Even so, if someone outside the field were to read our papers without context, reproducing the results would be difficult or impossible. Atomic physics (like many scientific fields) has a shared language that can convey complex concepts very concisely. For example, I could say, “we create a 3D MOT from a vapor of 133Cs atoms over 50 ms before transferring to a wide (8 µm waist) optical dipole trap that loads our tight optical dipole trap (2 µm waist) in the orthogonal direction,” and an atomic physicist with a cesium apparatus in their lab would be able to recreate the scenario. First, they might reach out to me to ask what the wavelength of my trapping laser is. The recreation also assumes that you know how to make a MOT (magneto-optical trap) and an optical dipole trap (often called optical tweezers in popular science articles).
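To make the point about shared language concrete, here’s that one sentence unpacked into explicit parameters. Every number comes from the sentence itself; the field names are placeholders I chose for illustration, and the trap wavelength is left empty because it’s exactly the detail the sentence doesn’t state — the clarifying question you’d email the author:

```python
from dataclasses import dataclass
from typing import Optional

# Purely illustrative: the quoted experimental sequence written out as
# explicit parameters. Field names are my own invention; None marks
# details the sentence leaves unstated.
@dataclass
class TrapStage:
    name: str
    duration_ms: Optional[float] = None
    waist_um: Optional[float] = None
    wavelength_nm: Optional[float] = None  # not given -- ask the author

sequence = [
    TrapStage(name="3D MOT from 133Cs vapor", duration_ms=50.0),
    TrapStage(name="wide optical dipole trap", waist_um=8.0),
    TrapStage(name="tight optical dipole trap (orthogonal)", waist_um=2.0),
]

for stage in sequence:
    print(stage)
```

One short sentence of jargon expands into a whole table of parameters — and even then, some entries stay blank until you ask.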
In the coming months, I plan to publish several pieces on some of these typical atomic physics procedures. I’ll explain the basics of how we “trap” atoms, how we can cool them down to near absolute zero, and how we measure their temperature in the first place. Keep an eye out if you’d like to learn more, and thank you again for reading and your questions!