Saturday, November 30, 2024

Repetitions are Sequences

When doing a task like working out, a common pattern is to perform something like 100 reps, then 90 reps, then 80, and so on, until you’ve completely counted down to zero. But this pattern can also be expressed arithmetically.

We say that there are 11 terms in this sequence: 100, 90, 80, 70, 60, 50, 40, 30, 20, 10, and 0. Alternatively, we could count the terms by solving:

\[ 0 = 100 - 10 (n - 1) \]

Solving for \( n \) confirms that there are 11 terms.

Now, let \( S_n \) represent the sum of the first \( n \) terms of our sequence, \( n \) represent the number of terms, and \( t_1 \) and \( t_n \) represent the first and last terms of the sequence.

\[ S_n = \frac{n}{2} \cdot (t_1 + t_n) \]

If we're beginning at 100 and counting all the way down to zero, we plug those values into our equation to get the total sum of 550.

\[ S_n = \frac{11}{2} \cdot (100 + 0) = 550 \]
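As a quick sanity check, the same computation can be written in a few lines of Python (the function name here is my own):

```python
def arithmetic_series_sum(first, last, step):
    """Sum an arithmetic sequence counting down from `first` to `last`."""
    # Number of terms: solve last = first - step * (n - 1) for n.
    n = (first - last) // step + 1
    # S_n = n/2 * (t_1 + t_n)
    return n * (first + last) // 2

print(arithmetic_series_sum(100, 0, 10))  # 550
```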

Thursday, November 21, 2024

Absence as Category Error

When you switch on a lightbulb, your eyes perceive photons, and some neurons in your brain activate. If you switch off the light, then so-called ‘off’ neurons activate.

Photoreceptors include rods, which are responsible for the detection of dim light, and cones, which function in bright light and are responsible for the ability to distinguish colours based on their unique spectral sensitivities. These cells each have a ciliary process, known as an outer segment, that consists of stacks of membranous discs where the proteins involved in light sensing and signalling are located. The rods and cones connect to bipolar cells. There are also neurons responsible for modifying visual signals, such as amacrine cells, which connect rod bipolar cells to cone bipolar cells, and horizontal cells, which mediate feedback inhibition to the photoreceptors. The cone bipolar cells connect to ganglion cells, which integrate the signals from the upstream neurons. The ganglion cell axons assemble to form the optic nerve for transmission of visual signals to the brain.[1]

You don’t actually “stop seeing” when you’re in the dark. No; the mind physically represents nothingness in a pattern of neurons. And in the case of literal darkness (as opposed to cognitive dimness), photoreceptors include a special adaptation, the dim-light-sensitive rods described above, that allows us to see even when there appears to be very little light.

Similarly, in physics, there is darkness—the void of space. But is it entirely correct to claim that it is empty? Scientists posit that it is populated with vacuum energy: a theoretical background energy present throughout the universe, a consequence of the uncertainty principle.

The uncertainty principle can be visualized in this way: imagine a field where virtual particles are constantly popping in and out of existence, imperceptible to the eyes but verifiable by deduction and lab experiments. The implication of this is that the universe is, in a sense, 'charged.' This is a tenet of quantum field theory.

This theory additionally predicts an enormous amount of vacuum energy throughout the universe. Yet that energy seems to bear little cosmological consequence, as the energy density we actually observe is vastly smaller than what the model hypothesizes.

If we assume the standard model is correct, this discrepancy is known as the cosmological constant problem. It sits alongside the several ideas put forth to account for hidden (dark) states of matter and energy implied by the universe's accelerating expansion. The thing I'm pointing at here is the gap itself: something is there, and even if it isn't "dark energy," it is, at the very least, a gap in our understanding of the universe.

The point I’m trying to make, however, is that when we attempt to observe or discuss "nothing," we inevitably encounter "something"—or we find that "nothing" itself is a direct or indirect reference to "something." I argue that it’s impossible to truly discuss "nothing." In a genuine vacuum, a place where absolutely nothing exists, there wouldn't be any fields to measure. There would be no spacetime to speculate about.

In that vein, asking "Why is there something rather than nothing?" is a category error. If we were to ask, "Why is it snowing?" one could at least try to formulate an answer: "Due to cold temperatures, water in the atmosphere froze and fell to the ground as ice crystals." In this scenario, we are asking about a specific feature of the map. Alternatively, we could claim it was because a giant snowman god in the sky caused it to snow. On the other hand, the question "Why is there something rather than nothing?" is unanswerable, because we are no longer asking about a specific abstraction within the map—instead, we're asking what created the map itself.

Similarly, the phrase “nothing exists” is a kind of inverse category error—a claim that nothing is real. However, labels like "absence" or "nothing" often function as references or pointers to other things. 

If we must say it, the phrase "nothing exists" is not a self-contradicting statement but a humorous or horrifying statement of fact. Nothing exists.

"Nothing could ruin this moment." "Nothing can dim this light." "Nothing is too great a challenge." "Nothing lasts forever."


Friday, November 15, 2024

Definitions: Why Words Are Load-Bearing

Many words function through their extensional definitions—the specific examples and instances that give them meaning. For example, consider when someone suggests that the solution to a problem is more ‘agency’ but, unfortunately, does not elaborate further.

This can become a quasi "semantic stopping point," where someone uses, repeats, or hears a word without taking time to examine what it functionally means.

‘Just maximize agency,’ someone might say in the face of a problem. But we cannot formalize a coherent model or actionable plan from merely hearing the word ‘agency’ and holding a fuzzy, informal concept in mind. Sure, the word may evoke intensional definitions, e.g., related words like ‘autonomy,’ ‘responsibility,’ or ‘power’—but are these associations alone enough?

What does maximizing agency actually look like in practice? Does it mean giving more freedom? Increasing decision-making capacity? Creating more opportunities for action?

To attempt to answer such questions, we need extensional definitions—specific ideas, examples, and concepts that the term ‘agency’ points to.[1] In this sense, then, any given word may serve as a header for a broader class of related concepts. It is a class reference.

At first pass, agency appears to encompass a deeper form of intelligence involving philosophy, language, and various cognitive tools.

And that involves concrete physical resources, like energy. But it also involves abstract concepts like strategy, discipline, psychological drives, and uncertainty tolerance—particularly the willingness to engage with challenging ideas without flinching away.

These extensional definitions—the examples, instances, and related concepts—help us understand not just what something means, but how it operates in practice and relates to other ideas.

Agency, then, is much more than just responsibility or autonomy. The term is load-bearing and involves many other concepts, such as:

  • Self-regulation: The ability to control impulses and follow through on plans
  • Responsibility: Situational awareness and acceptance of consequences
  • Strategic thinking: The alignment of short-term and long-term goals
  • Uncertainty tolerance: The capacity to make decisions despite incomplete knowledge

As a general but not absolute rule, behind any intensional definition, there are extensional definitions that make the intensional definition work by giving it practical meaning and power.


Footnotes

  1. Extensional and intensional definitions ↩︎

Wednesday, November 13, 2024

Frequentism and Bayesianism

From Frequentism and Bayesianism: A Practical Introduction, by Jake VanderPlas:

For frequentists, probability only has meaning in terms of a limiting case of repeated measurements. That is, if I measure the photon flux F from a given star (we’ll assume for now that the star’s flux does not vary with time), then measure it again, then again, and so on, each time I will get a slightly different answer due to the statistical error of my measuring device. In the limit of a large number of measurements, the frequency of any given value indicates the probability of measuring that value. For frequentists probabilities are fundamentally related to frequencies of events. This means, for example, that in a strict frequentist view, it is meaningless to talk about the probability of the true flux of the star: the true flux is (by definition) a single fixed value, and to talk about a frequency distribution for a fixed value is nonsense.

For Bayesians, the concept of probability is extended to cover degrees of certainty about statements. Say a Bayesian claims to measure the flux F of a star with some probability P(F): that probability can certainly be estimated from frequencies in the limit of a large number of repeated experiments, but this is not fundamental. The probability is a statement of my knowledge of what the measurement result will be. For Bayesians, probabilities are fundamentally related to our own knowledge about an event. This means, for example, that in a Bayesian view, we can meaningfully talk about the probability that the true flux of a star lies in a given range. That probability codifies our knowledge of the value based on prior information and/or available data.
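The frequentist reading above lends itself to a quick simulation; the flux value and measurement error below are made-up numbers, purely for illustration:

```python
import random

random.seed(42)
TRUE_FLUX = 1000.0  # hypothetical "true" photon flux of the star
SIGMA = 50.0        # statistical error of the measuring device

# Each measurement differs slightly due to statistical error.
measurements = [random.gauss(TRUE_FLUX, SIGMA) for _ in range(100_000)]

# Frequentist view: in the limit of many measurements, the frequency
# of values falling in a range estimates the probability of measuring
# a value in that range.
in_range = sum(950 <= m <= 1050 for m in measurements)
print(in_range / len(measurements))  # ~0.68, i.e. within one sigma
```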

Monday, November 11, 2024

Four Forces

It bothers me that in popular science discourse, gravity is so frequently emphasized while other forces are overlooked. Nobody even discusses the strong and weak forces anymore! OK. Maybe they do sometimes and I’m just exaggerating. Furthermore, gravity is the weakest force! However, it does affect things on an infinite scale. Behold, a list of the four physical forces:

  • Strong interaction — This is the strongest force—the force that holds the nuclei of atoms together, binding protons and neutrons into nuclei
  • Electromagnetism — Another force stronger than gravity—electromagnetism is the force that acts on charged particles and gives rise to electromagnetic radiation (e.g. light, radio waves, etc.)
  • Weak interaction — A force weaker than electromagnetism, involved in subatomic processes like radioactive decay, the decay of unstable particles such as muons, and the nuclear reactions in the Sun
  • Gravity — The weakest force, but with an unlimited range that inevitably affects large-scale things, like planets, asteroids, and so on.

Saturday, November 09, 2024

Myths of Human Genetics

From Myths of Human Genetics, by John H. McDonald:

A fun way to teach the basics of genetics is to have students look at traits on themselves. Just about every biology student has, in one class or another, been asked to roll their tongue, look at their earlobes, or check their fingers for hair. Students can easily collect data on several different traits and learn about genes, dominant and recessive alleles, maybe even Hardy-Weinberg proportions. Best of all, these data don't require microscopes, petri dishes, or stinky fly food.

Unfortunately, what textbooks, lab manuals and web pages say about these human traits is mostly wrong. Most of the common, visible human traits that are used in classrooms do NOT have a simple one-locus, two-allele, dominant vs. recessive method of inheritance. Rolling your tongue is not dominant to non-rolling, unattached earlobes are not dominant to attached, straight thumbs are not dominant to hitchhiker's thumb, etc.

In some cases, the trait doesn't even fall into the two distinct categories described by the myth. For example, students are told that they either have a hitchhiker's thumb, which bends backwards at a sharp angle, or a straight thumb. In fact, the angle of the thumb ranges continuously, with most thumbs somewhere in the middle. This was clearly shown in the very first paper on the genetics of hitchhiker's thumb (Glass and Kistler 1953), yet 60 years later, teachers still ask students which of the two kinds of thumb they have.

In other cases, the trait really does fall into two categories, but it isn't determined by genetics. For example, students are asked to fold their arms, then told that the allele for having the right forearm on top is dominant. It is true that most people fall into two categories, right arm on top or left arm on top, but the very first study on the subject (Wiener 1932) clearly demonstrated that there is little or no genetic influence on this trait: pairs of right-arm parents are just about as likely to have right-arm children as are pairs of left-arm parents.

Some traits, such as tongue rolling, were originally described as fitting a simple genetic model, but later research revealed them to be more complicated. Other traits were shown from the very beginning to not fit the simple genetic model, but somehow textbook authors decided to ignore this. A quick search in the standard reference on human genetics, Online Mendelian Inheritance in Man (OMIM), makes it clear that most of these traits do not fit the simple genetic model. It is an embarrassment to the field of biology education that textbooks and lab manuals continue to perpetuate these myths.
https://udel.edu/~mcdonald/mythintro.html

Radioactive Dating

Matter is composed of chemical elements. Every chemical element has its own arrangement of protons, neutrons, and electrons. In particular, each element has a unique atomic number, which indicates the number of protons in its nucleus.

Every element also has varying isotopes—differing versions of itself whose nuclei contain differing numbers of neutrons. Some of those isotopes are unstable (radioactive): they decay and turn into different elements over time.

The process of measuring these radioactive decay products in materials is known as radiometric dating. For example, thanks to meteorite samples, we know that the Earth is around 4.5 billion years old. But how, exactly, do we know this?

There are various types of radiometric dating, and the process can involve different elements—carbon, rubidium, potassium, samarium, uranium, and thorium, among others.

The elements uranium and thorium both decay into lead over billions of years. Thus, it is possible to determine the age of materials like rocks and meteorites by measuring four isotopes of lead: Pb-206, Pb-207, Pb-208, and Pb-204. All of these, except for Pb-204, are considered radiogenic isotopes.

This trick relies on the fact that uranium and thorium decay in a constant, predictable way over time. For example, uranium-238 has a half-life of about 4.5 billion years, and it decays into lead-206. Uranium-235 has a half-life of approximately 700 million years and decays into lead-207.[1]

Parent Isotope   Stable Daughter Product   Currently Accepted Half-Life
Uranium-238      Lead-206                  4.5 billion years
Uranium-235      Lead-207                  704 million years
Thorium-232      Lead-208                  14.0 billion years
Rubidium-87      Strontium-87              48.8 billion years
Potassium-40     Argon-40                  1.25 billion years
Samarium-147     Neodymium-143             106 billion years
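The "constant, predictable" decay described above follows the exponential decay law \( N(t) = N_0 \cdot (1/2)^{t / T} \), where \( T \) is the half-life. Inverting it yields an age from the fraction of parent isotope remaining—a minimal sketch, with a function name of my own choosing:

```python
import math

def age_from_fraction(parent_fraction, half_life):
    """Age of a sample given the fraction of the parent isotope remaining.

    From N(t) = N0 * (1/2) ** (t / half_life),
    solving for t gives t = half_life * log2(N0 / N).
    """
    return half_life * math.log2(1.0 / parent_fraction)

# If exactly half of the original U-238 remains, the sample is one
# half-life old: about 4.5 billion years.
print(age_from_fraction(0.5, 4.5e9) / 1e9)  # 4.5
```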

Lead-lead dating does not directly involve uranium. Instead, it involves analyzing the ratios between specific amounts and isotopes of lead, the decay products of uranium and thorium.

Uranium-lead dating, on the other hand, measures the ratios of parent uranium to daughter lead directly, along the decay routes of uranium. This method frequently involves sampling the mineral zircon, but it can also involve other minerals, such as monazite.

Why minerals? Why not just measure rocks? When rocks form, there’s a chance they may contain some preexisting amount of lead. This can skew measurements and produce unreliable results. Additionally, the Earth is dynamic—magma and rocks are constantly undergoing change, and their geological clocks are repeatedly reset and tampered with.

Zircon, a crystalline mineral, essentially offers what rocks cannot: a clean starting point for radiometric dating. Any lead found inside zircon almost certainly originated from decayed uranium and wasn’t there beforehand.

Due to its crystal lattice structure, zircon is picky about its elemental friends. Lattices like zircon’s are useful for radiometric dating because they tend to reject lead during formation while letting uranium in.

Theoretically, lead in an unusual oxidation state, like Pb+4, could potentially make its way into zircon.[2] But the most common compounds of lead are found in a +2 oxidation state, not +4—this is due to the inert pair effect.[3]

Despite the advent of “uranium-lead” dating, “lead-lead” dating is still useful. One of the earliest measurements of the age of the Earth was a lead-lead measurement. As it turns out, meteorites are quite useful in offering a bigger-than-Earth point of view.

Meteorites, which mostly come from the asteroid belt between Mars and Jupiter, are remnants from the formation of the early solar system. As such, they largely remain unchanged. They serve as a useful reference point and cosmic timestamp hinting at when our solar system began.

Lead-lead dating was used to determine the age of the Canyon Diablo meteorite from the Barringer Crater. The result suggested the meteorite was roughly 4.5 billion years old—a value that has since been replicated hundreds of times by other tests.


Footnotes

  1. Mathematical Treasure: James A. Garfield’s Proof of the Pythagorean Theorem ↩︎
  2. Oxidation state and coordination environment of Pb in U-bearing minerals ↩︎
  3. Periodicity and the s- and p-block elements ↩︎

Using Python To Access archive.today, July 2025

It seems like a lot of the previous software wrappers to interact with archive.today (and archive.is, archive.ph, etc) via the command-line ...