Saturday, November 30, 2024

Repetitions are Sequences

When doing a task like working out, a common pattern is to perform something like 100 reps, then 90 reps, then 80, and so on, until you’ve counted all the way down to zero. This pattern can also be expressed arithmetically: it is an arithmetic sequence with a common difference of -10.

We say that there are 11 terms in this sequence: 100, 90, 80, 70, 60, 50, 40, 30, 20, 10, and 0. Alternatively, we could count the terms by solving:

\[ 0 = 100 - 10 (n - 1) \quad \Longrightarrow \quad n = 11 \]

Now, let \( S_n \) represent the sum of the terms in our sequence, \( n \) represent the number of terms, and \( t_1 \) and \( t_n \) represent the first and last terms of the sequence.

\[ S_n = \frac{n}{2} \cdot (t_1 + t_n) \]

If we're beginning at 100 and counting all the way down to zero, we plug those values into our equation to get the total sum of 550.

\[ S_n = \frac{11}{2} \cdot (100 + 0) = 550 \]
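As a quick sanity check, here’s a minimal Python sketch (the variable names are my own) that builds the countdown, counts its terms, and compares a brute-force sum against the closed-form formula:

```python
# Countdown from 100 to 0 in steps of 10: 100, 90, ..., 0
reps = list(range(100, -1, -10))

n = len(reps)      # number of terms: 11
total = sum(reps)  # brute-force sum: 550

# Closed form: S_n = n/2 * (t_1 + t_n)
closed_form = n / 2 * (reps[0] + reps[-1])

print(n, total, closed_form)  # 11 550 550.0
```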

Thursday, November 21, 2024

Absence as Category Error

When you switch on a lightbulb, your eyes perceive photons, and some neurons in your brain activate. If you switch off the light, then so-called ‘off’ neurons activate.

Photoreceptors include rods, which are responsible for the detection of dim light, and cones, which function in bright light and are responsible for the ability to distinguish colours based on their unique spectral sensitivities. These cells each have a ciliary process, known as an outer segment, that consists of stacks of membranous discs where the proteins involved in light sensing and signalling are located. The rods and cones connect to bipolar cells. There are also neurons responsible for modifying visual signals, such as amacrine cells, which connect rod bipolar cells to cone bipolar cells, and horizontal cells, which mediate feedback inhibition to the photoreceptors. The cone bipolar cells connect to ganglion cells, which integrate the signals from the upstream neurons. The ganglion cell axons assemble to form the optic nerve for transmission of visual signals to the brain.[1]

You don’t actually “stop seeing” when you’re in the dark. No; the mind physically represents nothingness in a pattern of neurons. And in the case of literal darkness (as opposed to cognitive dimness), rod photoreceptors are specially adapted to let us see even when there appears to be very little light.

Similarly, in physics, there is darkness—the void of space. But is it entirely correct to claim that it is empty? Scientists posit that it is populated with vacuum energy: a theoretical background energy that pervades the universe, motivated in part by the uncertainty principle.

The uncertainty principle can be visualized in this way: imagine a field where virtual particles are constantly popping in and out of existence, imperceptible to the eyes but verifiable by deduction and lab experiments. The implication of this is that the universe is, in a sense, 'charged.' This is a tenet of quantum field theory.
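For reference, the energy-time form of the uncertainty relation, which is the one usually invoked to motivate these fleeting fluctuations, can be written as:

\[ \Delta E \, \Delta t \geq \frac{\hbar}{2} \]

Loosely: the shorter the time interval \( \Delta t \), the larger the energy fluctuation \( \Delta E \) it can accommodate.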

This theory additionally predicts an enormous vacuum energy density throughout the universe. Yet the energy density we actually observe is smaller than the theoretical estimate by many orders of magnitude.

If we assume the standard model is correct, this knowledge gap is known as the cosmological constant problem. It sits alongside the other open questions about the hidden (dark) states of matter and energy implied by the universe's accelerating expansion. The thing I'm pointing at here is that there's a gap: something is there, and even if it isn't "dark energy," it is, at the very least, a gap in our understanding of the universe.

The point I’m trying to draw, however, is that when we attempt to observe or discuss "nothing," we inevitably encounter "something"—or we find that "nothing" itself is a direct or indirect reference to "something." I argue that it’s impossible to truly discuss "nothing." In a genuine vacuum, a place where absolutely nothing exists, there wouldn't be any fields to measure. There would be no spacetime to speculate about.

In that vein, asking "Why is there something rather than nothing?" is a category error. If we were to ask, "Why is it snowing?" one could at least try to formulate an answer: "Due to cold temperatures, water in the atmosphere froze and fell to the ground as ice crystals." In this scenario, we are asking about a specific feature of the map. (Alternatively, we could claim a giant snowman god in the sky caused it to snow.) On the other hand, the question "Why is there something rather than nothing?" is unknowable, because we are no longer asking about a specific abstraction of the map—instead, we're asking what created the map itself.

Similarly, the phrase “nothing exists” is a kind of inverse category error—a claim that nothing is real. However, labels like "absence" or "nothing" often function as references or pointers to other things. 

If we must say it, the phrase "nothing exists" is not a self-contradicting statement but a humorous or horrifying statement of fact. Nothing exists.

"Nothing could ruin this moment." "Nothing can dim this light." "Nothing is too great a challenge." "Nothing lasts forever."


Friday, November 15, 2024

Definitions: Why Words Are Load-Bearing

Many words function through their extensional definitions—the specific examples and instances that give them meaning. Consider, for example, when someone suggests that the solution to a problem is more ‘agency’ but, unfortunately, doesn’t elaborate further.

This can become a quasi "semantic stopping point," where someone uses, repeats, or hears a word without taking time to examine what it functionally means.

‘Just maximize agency,’ someone might say in the face of a problem. But we cannot formalize a coherent model or actionable plan from merely hearing the word ‘agency’ and holding a fuzzy, informal concept in mind. Sure, the word may evoke intensional definitions, e.g., related words like ‘autonomy,’ ‘responsibility,’ or ‘power’—but are these associations alone enough?

What does maximizing agency actually look like in practice? Does it mean giving more freedom? Increasing decision-making capacity? Creating more opportunities for action?

To attempt to answer such questions, we need extensional definitions—specific ideas, examples, and concepts that the term ‘agency’ points to.[1] In this sense, then, any given word may serve as a header for a broader class of related concepts. It is a class reference.
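To make the "class reference" idea concrete, here's a toy Python sketch (the rule and the examples are mine, purely illustrative): an intensional definition behaves like a predicate, while an extensional definition behaves like an enumeration of the instances the word points to.

```python
# Intensional definition: a rule or property that members satisfy.
# On its own, the rule is fuzzy and hard to act on.
def is_agentic(action: str) -> bool:
    return action.startswith(("decide", "initiate"))

# Extensional definition: concrete instances the word points to.
agency_examples = [
    "decide what to work on without being told",
    "initiate a difficult conversation",
    "decide to cut a failing project",
]

# The extension is what gives the predicate practical meaning:
print([is_agentic(a) for a in agency_examples])  # [True, True, True]
```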

At first pass, agency appears to encompass a deeper form of intelligence involving philosophy, language, and various cognitive tools.

And that involves concrete physical resources, like energy. But it also involves abstract concepts like strategy, discipline, psychological drives, and uncertainty tolerance—particularly the willingness to engage with challenging ideas without flinching away.

These extensional definitions—the examples, instances, and related concepts—help us understand not just what something means, but how it operates in practice and relates to other ideas.

Agency, then, is much more than just responsibility or autonomy. The term is load-bearing and involves many other concepts, such as:

  • Self-regulation: The ability to control impulses and follow through on plans
  • Responsibility: Situational awareness and acceptance of consequences
  • Strategic thinking: The alignment of short-term and long-term goals
  • Uncertainty tolerance: The capacity to make decisions despite incomplete knowledge

As a general but not absolute rule, behind any intensional definition, there are extensional definitions that make the intensional definition work by giving it practical meaning and power.


Footnotes

  1. Extensional and intensional definitions ↩︎

Wednesday, November 13, 2024

Frequentism and Bayesianism

From Frequentism and Bayesianism: A Practical Introduction, by Jake VanderPlas:

For frequentists, probability only has meaning in terms of a limiting case of repeated measurements. That is, if I measure the photon flux F from a given star (we’ll assume for now that the star’s flux does not vary with time), then measure it again, then again, and so on, each time I will get a slightly different answer due to the statistical error of my measuring device. In the limit of a large number of measurements, the frequency of any given value indicates the probability of measuring that value. For frequentists probabilities are fundamentally related to frequencies of events. This means, for example, that in a strict frequentist view, it is meaningless to talk about the probability of the true flux of the star: the true flux is (by definition) a single fixed value, and to talk about a frequency distribution for a fixed value is nonsense.

For Bayesians, the concept of probability is extended to cover degrees of certainty about statements. Say a Bayesian claims to measure the flux F of a star with some probability P(F): that probability can certainly be estimated from frequencies in the limit of a large number of repeated experiments, but this is not fundamental. The probability is a statement of my knowledge of what the measurement result will be. For Bayesians, probabilities are fundamentally related to our own knowledge about an event. This means, for example, that in a Bayesian view, we can meaningfully talk about the probability that the true flux of a star lies in a given range. That probability codifies our knowledge of the value based on prior information and/or available data.
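As a minimal numerical sketch of the contrast, loosely following the flux example above (the numbers and variable names here are illustrative, not from the article): the frequentist summary is a point estimate with a standard error over hypothetical repeated experiments, while the Bayesian summary is a posterior distribution over the true flux, here computed on a grid with a flat prior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated photon-flux measurements of a star with a constant true flux
true_flux = 1000.0
sigma = np.sqrt(true_flux)  # per-measurement noise scale
obs = true_flux + rng.normal(0, sigma, size=50)

# Frequentist: a point estimate plus a standard error describing the
# spread of such estimates over hypothetical repeated experiments
f_hat = obs.mean()
std_err = obs.std(ddof=1) / np.sqrt(obs.size)
print(f"Frequentist: F = {f_hat:.1f} +/- {std_err:.1f}")

# Bayesian: a posterior distribution over the true flux, computed on a
# grid with a flat prior (the posterior is proportional to the likelihood)
F = np.linspace(f_hat - 5 * std_err, f_hat + 5 * std_err, 1001)
log_like = -0.5 * ((obs[:, None] - F[None, :]) ** 2 / sigma**2).sum(axis=0)
posterior = np.exp(log_like - log_like.max())
dF = F[1] - F[0]
posterior /= posterior.sum() * dF  # normalize to a probability density
print(f"Bayesian: posterior mean = {(F * posterior).sum() * dF:.1f}")

# With a flat prior, the two summaries coincide for this simple problem;
# they diverge once the prior carries real information.
```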

Monday, November 11, 2024

Four Forces

It bothers me that in popular science discourse, gravity is so frequently emphasized while other forces are overlooked. Nobody even discusses the strong and weak forces anymore! OK. Maybe they do sometimes and I’m just exaggerating. And yes, gravity is the weakest force! However, it does affect things on an infinite scale. Behold, a list of the four fundamental forces:

  • Strong interaction — This is the strongest force—the force that holds the nuclei of atoms together, binding protons and neutrons into nuclei
  • Electromagnetism — Another force stronger than gravity—electromagnetism is the force that acts on charged particles and propagates as electromagnetic radiation (e.g., light, radio waves)
  • Weak interaction — A force weaker than electromagnetism, involved in subatomic processes like radioactive decay, the decay of unstable particles (e.g., muons), and nuclear reactions in the Sun
  • Gravity — The weakest force, but one with infinite range, which is why it dominates at large scales: objects, planets, asteroids, and so on.

Saturday, November 09, 2024

Myths of Human Genetics

From Myths of Human Genetics, by John H. McDonald:

A fun way to teach the basics of genetics is to have students look at traits on themselves. Just about every biology student has, in one class or another, been asked to roll their tongue, look at their earlobes, or check their fingers for hair. Students can easily collect data on several different traits and learn about genes, dominant and recessive alleles, maybe even Hardy-Weinberg proportions. Best of all, these data don't require microscopes, petri dishes, or stinky fly food.

Unfortunately, what textbooks, lab manuals and web pages say about these human traits is mostly wrong. Most of the common, visible human traits that are used in classrooms do NOT have a simple one-locus, two-allele, dominant vs. recessive method of inheritance. Rolling your tongue is not dominant to non-rolling, unattached earlobes are not dominant to attached, straight thumbs are not dominant to hitchhiker's thumb, etc.

In some cases, the trait doesn't even fall into the two distinct categories described by the myth. For example, students are told that they either have a hitchhiker's thumb, which bends backwards at a sharp angle, or a straight thumb. In fact, the angle of the thumb ranges continuously, with most thumbs somewhere in the middle. This was clearly shown in the very first paper on the genetics of hitchhiker's thumb (Glass and Kistler 1953), yet 60 years later, teachers still ask students which of the two kinds of thumb they have.

In other cases, the trait really does fall into two categories, but it isn't determined by genetics. For example, students are asked to fold their arms, then told that the allele for having the right forearm on top is dominant. It is true that most people fall into two categories, right arm on top or left arm on top, but the very first study on the subject (Wiener 1932) clearly demonstrated that there is little or no genetic influence on this trait: pairs of right-arm parents are just about as likely to have right-arm children as are pairs of left-arm parents.

Some traits, such as tongue rolling, were originally described as fitting a simple genetic model, but later research revealed them to be more complicated. Other traits were shown from the very beginning to not fit the simple genetic model, but somehow textbook authors decided to ignore this. A quick search in the standard reference on human genetics, Online Mendelian Inheritance in Man (OMIM), makes it clear that most of these traits do not fit the simple genetic model. It is an embarrassment to the field of biology education that textbooks and lab manuals continue to perpetuate these myths.
https://udel.edu/~mcdonald/mythintro.html

Radioactive Dating

Matter is composed of chemical elements. Every chemical element has its own arrangement of protons, neutrons, and electrons. As a consequence, each element also has its own atomic number, which indicates the number of protons in its nucleus.

Every element also has isotopes—differing versions of itself with varying numbers of neutrons in their nuclei. Some of those isotopes are unstable (radioactive); they decay and turn into different elements over time.

The process of dating materials by tracing these radioactive isotopes and their decay products is known as radiometric dating. For example, thanks to meteorite samples, we know that the Earth is around 4.5 billion years old. But how, exactly, do we know this?

There are various types of radiometric dating, and the process can involve different elements: carbon, rubidium, potassium, samarium, uranium, and thorium, among others.

The elements uranium and thorium both decay into lead over billions of years. Thus, it is possible to determine the age of materials like rocks and meteorites by measuring various isotopes of lead—Pb-206, Pb-207, Pb-208, and Pb-204—and inferring an age from their ratios. All of these, except for Pb-204, are considered radiogenic isotopes.

This trick relies on the fact that uranium and thorium decay at constant, predictable rates over time. For example, uranium-238 has a half-life of about 4.5 billion years, and it decays into lead-206. Uranium-235 has a half-life of approximately 700 million years and decays into lead-207.[1] (A worked example of the age arithmetic follows the table below.)

| Parent Isotope | Stable Daughter Product | Currently Accepted Half-Life |
| --- | --- | --- |
| Uranium-238 | Lead-206 | 4.5 billion years |
| Uranium-235 | Lead-207 | 704 million years |
| Thorium-232 | Lead-208 | 14.0 billion years |
| Rubidium-87 | Strontium-87 | 48.8 billion years |
| Potassium-40 | Argon-40 | 1.25 billion years |
| Samarium-147 | Neodymium-143 | 106 billion years |
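To make the "constant, predictable decay" arithmetic concrete, here's a minimal Python sketch of the standard age equation, assuming no initial daughter isotope (the clean-slate condition discussed below for zircon). The function name and example ratio are my own:

```python
import math

T_HALF_U238 = 4.5e9  # years, U-238 -> Pb-206 (from the table above)

def age_from_ratio(daughter_over_parent: float, t_half: float = T_HALF_U238) -> float:
    """Age in years from a daughter/parent isotope ratio.

    With no initial daughter present, D/P = e^(lambda * t) - 1,
    so t = ln(1 + D/P) / lambda, where lambda = ln(2) / t_half.
    """
    lam = math.log(2) / t_half
    return math.log(1 + daughter_over_parent) / lam

# A Pb-206/U-238 ratio of 1.0 means half the parent has decayed:
print(f"{age_from_ratio(1.0):.3g} years")  # 4.5e9, exactly one half-life
```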

Lead-lead dating does not directly involve uranium. Instead, it involves analyzing the ratios between specific isotopes of lead, the decay products of uranium and thorium.

Uranium-lead dating, on the other hand, relies on measuring isotope ratios along the decay routes of uranium and thorium themselves. This method frequently involves sampling the mineral zircon, but it can also involve other minerals, such as monazite.

Why minerals? Why not just measure rocks? When formations like rocks develop, there’s a chance they may contain some preexisting amount of lead. This can skew measurements and produce misleading results. Additionally, the Earth is dynamic—magma and rocks are constantly undergoing change and having their geological clocks reset and tampered with.

Unlike rocks, zircon, a crystalline mineral, essentially offers a clean starting point for radiometric dating—because any lead found inside zircon almost certainly originated from decayed uranium and wasn’t there beforehand.

Due to its crystal lattice structure, zircon is picky about its elemental friends. Lattices like zircon’s are useful for radiometric dating because they tend to reject lead during formation while letting uranium in.

Theoretically, lead in an unusual oxidation state, like Pb+4, could potentially make its way into zircon.[2] But the most common compounds of lead are found in a +2 oxidation state, not +4—this is due to the inert pair effect.[3]

Despite the advent of “uranium-lead” dating, “lead-lead” dating is still useful. One of the earliest measurements to determine the age of the Earth was a lead-lead measurement. As it turns out, meteorites are quite useful in offering a bigger-than-Earth point of view.

Meteorites, which mostly come from the asteroid belt between Mars and Jupiter, are remnants from the formation of the early solar system. As such, they largely remain unchanged. They serve as a useful reference point and cosmic timestamp hinting at when our solar system began.

Lead-lead dating was used to determine the age of the Canyon Diablo meteorite from the Barringer Crater. The result suggested the meteorite was roughly 4.5 billion years old—a value that has since been replicated hundreds of times by other tests.


Footnotes

  1. Mathematical Treasure: James A. Garfield’s Proof of the Pythagorean Theorem ↩︎
  2. Oxidation state and coordination environment of Pb in U-bearing minerals ↩︎
  3. Periodicity and the s- and p-block elements ↩︎

Tuesday, November 05, 2024

The Ethics of Belief

An excerpt from “The Ethics of Belief,” by William Kingdon Clifford—an essay on epistemology, rationality, and the care we should apply when forming our beliefs:

A bad action is always bad at the time when it is done, no matter what happens afterwards. Every time we let ourselves believe for unworthy reasons, we weaken our powers of self-control, of doubting, of judicially and fairly weighing evidence. We all suffer severely enough from the maintenance and support of false beliefs and the fatally wrong actions which they lead to, and the evil born when one such belief is entertained is great and wide. But a greater and wider evil arises when the credulous character is maintained and supported, when a habit of believing for unworthy reasons is fostered and made permanent. If I steal money from any person, there may be no harm done by the mere transfer of possession; he may not feel the loss, or it may prevent him from using the money badly. But I cannot help doing this great wrong towards Man, that I make myself dishonest. What hurts society is not that it should lose its property, but that it should become a den of thieves; for then it must cease to be society. This is why we ought not to do evil that good may come; for at any rate this great evil has come, that we have done evil and are made wicked thereby. In like manner, if I let myself believe anything on insufficient evidence, there may be no great harm done by the mere belief; it may be true after all, or I may never have occasion to exhibit it in outward acts. But I cannot help doing this great wrong towards Man, that I make myself credulous. The danger to society is not merely that it should believe wrong things, though that is great enough; but that it should become credulous, and lose the habit of testing things and inquiring into them; for then it must sink back into savagery.

The harm which is done by credulity in a man is not confined to the fostering of a credulous character in others, and consequent support of false beliefs. Habitual want of care about what I believe leads to habitual want of care in others about the truth of what is told to me. Men speak the truth to one another when each reveres the truth in his own mind and in the other’s mind; but how shall my friend revere the truth in my mind when I myself am careless about it, when I believe things because I want to believe them, and because they are comforting and pleasant? Will he not learn to cry, “Peace,” to me, when there is no peace? By such a course I shall surround myself with a thick atmosphere of falsehood and fraud, and in that I must live. It may matter little to me, in my cloud-castle of sweet illusions and darling lies; but it matters much to Man that I have made my neighbours ready to deceive. The credulous man is father to the liar and the cheat; he lives in the bosom of this his family, and it is no marvel if he should become even as they are. So closely are our duties knit together, that whoso shall keep the whole law, and yet offend in one point, he is guilty of all.

To sum up: it is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence.

If a man, holding a belief which he was taught in childhood or persuaded of afterwards, keeps down and pushes away any doubts which arise about it in his mind, purposely avoids the reading of books and the company of men that call in question or discuss it, and regards as impious those questions which cannot easily be asked without disturbing it—the life of that man is one long sin against mankind.

If this judgment seems harsh when applied to those simple souls who have never known better, who have been brought up from the cradle with a horror of doubt, and taught that their eternal welfare depends on what they believe, then it leads to the very serious question, Who hath made Israel to sin?

It may be permitted me to fortify this judgment with the sentence of Milton – “A man may be a heretic in the truth; and if he believe things only because his pastor says so, or the assembly so determine, without knowing other reason, though his belief be true, yet the very truth he holds becomes his heresy.”

And with this famous aphorism of Coleridge – “He who begins by loving Christianity better than Truth, will proceed by loving his own sect or Church better than Christianity, and end in loving himself better than all.”

Inquiry into the evidence of a doctrine is not to be made once for all, and then taken as finally settled. It is never lawful to stifle a doubt; for either it can be honestly answered by means of the inquiry already made, or else it proves that the inquiry was not complete.

“But,” says one, “I am a busy man; I have no time for the long course of study which would be necessary to make me in any degree a competent judge of certain questions, or even able to understand the nature of the arguments.” Then he should have no time to believe.

Saturday, November 02, 2024

Origins of Life

Today I learned the abiotic origin of organic compounds was established in the early 1800s, but the experiment wasn't actually intended to put forth a hypothesis for "abiogenesis"—or how life began on Earth.

The question of abiogenesis is the following one: how does so-called inanimate, non-living matter become animate, living matter? 

Friedrich Wöhler's seminal contributions to organic chemistry would eventually lead to further exploration of hypotheses about abiogenesis. Wöhler reacted two inorganic compounds—silver cyanate and ammonium chloride—to synthesize urea, an organic compound previously believed to be produced only by living things carrying a "life force."

After Wöhler's experiment, a large number of similar organic chemistry experiments followed throughout the 19th century—and, much later, the Miller-Urey experiment.

The Miller experiment explored an origin-of-life scenario by simulating possible early conditions on Earth. Miller combined the gases methane (CH4), ammonia (NH3), and hydrogen (H2) with water, exposed the mixture to electrical discharges, and produced various amino acids—the building blocks of proteins. The related hypothesis is known as the prebiotic or primordial soup hypothesis.

But is the “prebiotic soup” theory a reasonable explanation for the emergence of life? Contemporary geoscientists tend to doubt that the primitive atmosphere had the highly reducing composition used by Miller in 1953. Many have suggested that the organic compounds needed for the origin of life may have originated from extraterrestrial sources such as meteorites. However, there is evidence that amino acids and other biochemical monomers found in meteorites were synthesized in parent bodies by reactions similar to those in the Miller experiment. Localized reducing environments may have existed on primitive Earth, especially near volcanic plumes, where electric discharges may have driven prebiotic synthesis. In the early 1950s, several groups were attempting organic synthesis under primitive conditions. But it was the Miller experiment, placed in the Darwinian perspective provided by Oparin’s ideas and deeply rooted in the 19th-century tradition of synthetic organic chemistry, that almost overnight transformed the study of the origin of life into a respectable field of inquiry. (via Prebiotic Soup—Revisiting the Miller Experiment)

Whether Earth's early atmospheric conditions actually resembled those in the Miller experiment is up for debate. The synthesis, however, remains a pioneering experiment in the study of abiogenesis, since it demonstrated that inorganic compounds can form simple and then complex organic compounds under conditions potentially like those on prebiotic Earth, such as in the aftermath of asteroid impacts.

The hypotheses involving volcanic plumes and hydrothermal vents aren't the only abiogenesis hypotheses, of course. But they are particularly compelling ones, since one of the earliest forms of life on Earth was discovered in a ~3.42-billion-year-old subseafloor hydrothermal environment.

However, our last universal common ancestor is thought to have lived 4.2 billion years ago.

Friday, November 01, 2024

On Forgetting

I saw a post on Twitter today. Someone asked, “There are a number of techniques to help one recall and remember anything. From a neuroscience perspective, is it possible to intentionally learn to forget?”

Questions and thoughts like this are amusing. Someone appends the word "neuroscience" to a question or remark that is, in a way, about neuroscience. But it is also capable of being thought of in simpler, broader terms.

The term “neuroscience” sometimes evokes ideas of complex explanations—often for totally mundane things. The mere use of the word in a discussion can make an argument sound more authoritative than it actually is.

What I'm trying to say is that, more often than we might expect, neuroscience is about… other things. (Unless you work in a lab somewhere.)

For example, take the question "How can I use the power of neuroscience to forget bad memories?" Practical, useful answers probably have very little to do with neuroscience itself.

I believe it’s both more helpful and accurate to think of “forgetting” as one's brain recontextualizing and reshaping itself. 

We find ourselves outside an old context we were once in, and inside a new context we've yet to become fully aware of.

And I can't shake the intuition that a lot of what happens during such a change relies on messy meaning-making processes rather than formulaic mechanisms we can pin down exactly with computational models.

I’m also not convinced that “forgetting,” in the sense of literally erasing a memory, is an achievable objective in the first place. Episodic and long-term memories tend to stick around. This is a feature of the mind, not a bug.

For example, our brain remembers that time we burned our hand on the stove, so in the future, we don't do that again. Initially, we wince. But later, our brain molds itself around this memory. Without the ability to store information over large time scales, neither language, relationships, nor our own personal identities would develop.

Your memories probably don't change much. It’s your perception that rotates as you learn new things about your memories. There’s a lot of literature to support this hypothesis (brain plasticity, constructivism, memory consolidation, etc.).

But our brains are also imperfect—prone to distortions, illusions, and biases. Sometimes we wish we could remember things more clearly. And other times, we wish we could forget them.

In the spirit of Hebbian theory—"what fires together, wires together"—it seems to me that the only way to even come close to “forgetting” a memory is to replace it with a more powerful one. 

In other words, you think you want to forget, but what you really want is to think new thoughts. And thinking new thoughts requires noticing new things, seeing old things in new ways, or visiting new places, whether figuratively or literally.

The primary paradox of the past is this: the past has a grip on us, not because of the past itself, but because we're unable to conceptualize things that haven't occurred.

In the end, I tend to believe that forgetting is more about changing how we understand and relate to our memories than anything to do with forgetting itself. 

Using Python To Access archive.today, July 2025

It seems like a lot of the previous software wrappers to interact with archive.today (and archive.is, archive.ph, etc) via the command-line ...