<?xml version="1.0" encoding="utf-8"?>
<feed xml:lang="en" xmlns="http://www.w3.org/2005/Atom">
    <link href="https://simonsafar.com/index.xml" rel="self"></link>
    <link href="https://simonsafar.com" rel="alternate" type="text/html" hreflang="en"></link>
    <title>Simon Safar</title>
    <subtitle>All recent entries from simonsafar.com</subtitle>
    <id>https://simonsafar.com/index.xml</id>
    <updated>2026-03-29T01:35:03.574869-07:00</updated>
    <entry>
        <title type="html">Claude and Taxes</title>
        <link href="https://simonsafar.com/2026/claude_and_taxes/"></link>
        <id>https://simonsafar.com/2026/claude_and_taxes/</id>
        <author>
            <name>Simon Safar</name>
        </author>
        <published>2026-03-27T17:00:00.000000-07:00</published>
        <updated>2026-03-27T17:00:00.000000-07:00</updated>
        <summary type="html" xml:base="https://simonsafar.com/2026/claude_and_taxes/">
            
        &lt;h1&gt; Claude and Taxes &lt;/h1&gt;
        &lt;div class=&quot;date&quot;&gt; 2026/03/28 &lt;/div&gt;

        &lt;p&gt;
          Last year, I filed my tax return in October. Technically, you&apos;re supposed to send them out by April 15th; procrastinating on doing them is definitely easier though, and October was already the &lt;i&gt;second&lt;/i&gt; deadline you could catch.
        &lt;/p&gt;

        &lt;p&gt;
          Admittedly, some might consider the way I&apos;m doing taxes to be... somewhat unnatural? See also: &lt;a href=&quot;/2024/filing_taxes/&quot;&gt;Shell Scripts, Emacs and Taxes&lt;/a&gt;; it&apos;s text files, processed by OpenTaxSolver, invoked by a &lt;i&gt;build script&lt;/i&gt;. Just to quote that other article though:
        &lt;/p&gt;

        &lt;blockquote&gt;
          If your main job is not filing tax returns, you will typically forget what and how to file by the time the next return is due. With this approach, you can just copy over the previous year&apos;s build.sh script to the current one; search and replace &amp;quot;2022&amp;quot; with &amp;quot;2023&amp;quot;, adjust some page numbers, and fill in the fresh numbers to the templates. It&apos;s easy to notice that you had some number in a line last year &amp;amp; this year&apos;s is still missing.
        &lt;/blockquote&gt;

        &lt;p&gt;
          ... well, and then patch up OpenTaxSolver to handle the kind of cap gains that they didn&apos;t consider anyone would want to report. And write various Python scripts, parsing broker transaction CSVs. Does TurboTax do this for you though...? or do you have to fill this out, by hand...?
        &lt;/p&gt;
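        &lt;p&gt;
          (For the curious: &amp;quot;parsing broker transaction CSVs&amp;quot; means roughly the kind of thing below. This is a minimal, hypothetical sketch, not the actual script; the column names and the short/long-term split are assumptions, and OpenTaxSolver&apos;s real input layout differs by year.)
        &lt;/p&gt;

```python
# Hypothetical sketch of a convert_transactions.py: split a broker's
# 1099-B CSV export into short- vs. long-term sales before handing the
# numbers to OpenTaxSolver. Column names ("Acquired", "Sold", "Cost",
# "Proceeds") are assumptions; real broker exports (and OTS's expected
# input format) differ.
import csv
from datetime import datetime, timedelta

def split_by_term(rows):
    """Partition sale rows into (short_term, long_term) lists."""
    short_term, long_term = [], []
    for row in rows:
        acquired = datetime.strptime(row["Acquired"], "%m/%d/%Y")
        sold = datetime.strptime(row["Sold"], "%m/%d/%Y")
        # Held for over a year: long-term capital gain/loss.
        # (Approximation; the actual IRS rule is "more than one year"
        # by calendar date, not a fixed 365 days.)
        if sold - acquired > timedelta(days=365):
            long_term.append(row)
        else:
            short_term.append(row)
    return short_term, long_term

def convert(in_path, out_path):
    """Read the broker export, write term-tagged rows back out."""
    with open(in_path, newline="") as f:
        rows = list(csv.DictReader(f))
    short_term, long_term = split_by_term(rows)
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        for term, group in (("short", short_term), ("long", long_term)):
            for row in group:
                writer.writerow([term, row["Acquired"], row["Sold"],
                                 row["Cost"], row["Proceeds"]])
```

        &lt;p&gt;
          (The real script presumably also handles cost-basis adjustments and the other broker-specific quirks; that&apos;s where the &amp;quot;patch up OpenTaxSolver&amp;quot; part comes in.)
        &lt;/p&gt;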

        &lt;p&gt;
          There is, though, a point where you hit the point of no return. (... assuming you do want to file a... return, after all.) It&apos;s &lt;i&gt;almost&lt;/i&gt; done, just dig up that stupid table from &lt;i&gt;there&lt;/i&gt; and adjust the cost basis &lt;i&gt;there&lt;/i&gt;, it&apos;s 11pm already, next day is work and you still need to print all the pages... so &lt;i&gt;even if you switched over to TurboTax&lt;/i&gt;, it&apos;d be a lot more work. So you just push through.
        &lt;/p&gt;

        &lt;p&gt;
          And there is something rewarding in the printer spitting out the reams of completed forms, engulfing itself in a slight smell of ozone. &lt;i&gt;You&apos;ve done it.&lt;/i&gt;
        &lt;/p&gt;

        &lt;p&gt;
          That article was written &lt;i&gt;ages&lt;/i&gt; ago though. 1.5 years, to be exact. And even doing this &lt;i&gt;last&lt;/i&gt; year, there were some signs of... changes to come.
        &lt;/p&gt;

        &lt;h1&gt;Enter Claude&lt;/h1&gt;


        &lt;blockquote&gt;
          ... but in this world nothing can be said to be certain, except death and taxes.
          &lt;footer&gt;— Benjamin Franklin&lt;/footer&gt;
        &lt;/blockquote&gt;

        &lt;p&gt;
          And, possibly, tax returns.
        &lt;/p&gt;

        &lt;p&gt;
          Tax returns?
        &lt;/p&gt;

        &lt;p&gt;
          Well. Some of the process doesn&apos;t change. For example, the part where you write a checklist of tax docs to collect, W-2s etc., and then...
        &lt;/p&gt;

        &lt;p&gt;
          ... OK just kidding. Writing checklists... by &lt;i&gt;hand&lt;/i&gt;?? &amp;quot;hey Claude, let&apos;s look at last year&apos;s tax return; what exactly do we need for this?&amp;quot;
        &lt;/p&gt;

        &lt;p&gt;
          You go over the list, log into all the websites (that&apos;s the hardest part), and throw all the pdfs you find into a directory.
        &lt;/p&gt;

        &lt;p&gt;
          Then... you just tell Claude to grab the latest OpenTaxSolver, copy over the input templates...
        &lt;/p&gt;

        &lt;p&gt;
          &lt;pre&gt;&lt;code&gt;
{ ---- Income ---- }

{ -- Wages from W-2 forms Box-1. -- }
L1a              42345.00 ;  { Frobnicator Inc. W-2 box 1 }

L1b		; { Household employee wages not reported on Form(s) W-2. }
L1c		; { Tip income not reported on line 1a. }
L1d		; { Medicaid waiver payments not reported on Form(s) W-2. }
L1e		; { Taxable dependent care benefits from Form 2441, line 26. }
L1f		; { Employer-provided adoption benefits from Form 8839, line 29. }
L1g		; { Wages from Form 8919, line 6. }
L1h		; { Other earned income. }
L1h_type:   	  { Type of other earned income. }
L1i		; { Nontaxable combat pay election. }&lt;/code&gt;&lt;/pre&gt;
        &lt;/p&gt;

        &lt;p&gt;
          ... fill them out (it could even work around that long-standing whitespace-parsing bug in OTS), and... generally take care of things.
        &lt;/p&gt;

        &lt;p&gt;
          Out come the tax forms.
        &lt;/p&gt;

        &lt;p&gt;
          It reads the build script. It knows how to update the build script. It counts the pages in the PDFs and fixes the part that stitches them together wrong. Guess who wrote the detailed comments in the script below:
        &lt;/p&gt;

        &lt;p&gt;
          &lt;pre&gt;&lt;code class=&quot;bash&quot;&gt;#!/bin/bash
set -e

OTS=/path/to/OpenTaxSolver2025_23.06_linux64

(cd $OTS/src/; make)

# Generate transaction CSVs from brokerage 1099-B export
python3 convert_transactions.py

mkdir -p build
cd build

cp ../transactions_AD.csv .

# HSA
cp ../HSA_Form_8889_2025.txt .
$OTS/bin/taxsolve_HSA_f8889 HSA_Form_8889_2025.txt

# Federal
cp ../US_1040_2025.txt .
$OTS/bin/taxsolve_US_1040_2025 US_1040_2025.txt

# State
cp ../CA_540_2025.txt .
$OTS/bin/taxsolve_CA_540_2025 CA_540_2025.txt

# Generate PDFs from the solved text forms
$OTS/bin/universal_pdf_file_modifier \
    $OTS/src/formdata/f1040_meta.dat \
    US_1040_2025_out.txt \
    $OTS/src/formdata/f1040_pdf.dat \
    -o US_1040_2025_out.pdf

$OTS/bin/universal_pdf_file_modifier \
    $OTS/src/formdata/f8889_meta.dat \
    HSA_Form_8889_2025_out.txt \
    $OTS/src/formdata/f8889_pdf.dat \
    -o HSA_8889_out.pdf

$OTS/bin/universal_pdf_file_modifier \
    $OTS/src/formdata/CA_540_meta.dat \
    CA_540_2025_out.txt \
    $OTS/src/formdata/CA_540_pdf.dat \
    -o CA_540_2025_out.pdf

# Extract W-2 pages (federal copy vs state copy)
pdftk &amp;quot;../w2_2025.pdf&amp;quot; cat 1 output w2_federal.pdf
pdftk &amp;quot;../w2_2025.pdf&amp;quot; cat 2 output w2_state.pdf

# Assemble final federal return:
#   1-2:   1040 main form
#   3-4:   Schedule 1
#   5-6:   Schedule A (itemized deductions)
#   7-8:   Schedule B (interest &amp;amp; dividends)
#   9-10:  Schedule D (capital gains)
#   11-12: Form 8949 (short-term &amp;amp; long-term sales)
pdftk US_1040_2025_out.pdf cat 1-4 10-15 output US_1040_2025_forms.pdf

pdftk \
    US_1040_2025_forms.pdf \
    w2_federal.pdf \
    HSA_8889_out.pdf \
    cat output final_2025_federal.pdf

pdftk \
    US_1040_2025_forms.pdf \
    w2_state.pdf \
    HSA_8889_out.pdf \
    CA_540_2025_out.pdf \
    cat output final_2025_state.pdf&lt;/code&gt;&lt;/pre&gt;
        &lt;/p&gt;

        &lt;p&gt;
          It&apos;s still March. Tax returns are &lt;i&gt;done&lt;/i&gt;. Sure, I&apos;ll need to double-check everything, but it was, still, a single evening.
        &lt;/p&gt;

        &lt;p&gt;
          Reminds me of... the end of Vernor Vinge&apos;s &lt;i&gt;Fast Times at Fairmont High&lt;/i&gt; (it&apos;s really good, and so is &lt;i&gt;Rainbows End&lt;/i&gt;; read them):
        &lt;/p&gt;

        &lt;blockquote&gt;
            &lt;p&gt;
            (...) As with Ms. Wilson&apos;s math exam, the faculty has dug up some hoary piece of business that no reasonable person would ever bother with. For the vocational test, the topic would be a work specialty.
            &lt;/p&gt;

            &lt;p&gt;
              And today... it was Regna 5.
            &lt;/p&gt;

            &lt;p&gt;
              When Regna had been hot, back in Pa&apos;s day, tech schools had taken three years of training to turn out competent Regna practitioners.
            &lt;/p&gt;

            &lt;p&gt;
              It was a snap. Juan spent a couple hours scanning through the manuals, integrating the skills, and then he was ready for the programming task, some cross-corporate integration nonsense.
            &lt;/p&gt;

            &lt;p&gt;
              He was out by noon with an A.
            &lt;/p&gt;


        &lt;/blockquote&gt;

        &lt;p&gt;
          It&apos;ll soon be 2027. No FedEx package cannons. No AR contact lenses. Self-driving cars... aren&apos;t &lt;i&gt;boring&lt;/i&gt; yet.
        &lt;/p&gt;

        &lt;p&gt;
          We&apos;re getting there with the cross-corporate integration nonsense though.
        &lt;/p&gt;


      
        </summary>
    </entry>
    <entry>
        <title type="html">Evolution vs. Alignment</title>
        <link href="https://simonsafar.com/2026/evolution_vs_alignment/"></link>
        <id>https://simonsafar.com/2026/evolution_vs_alignment/</id>
        <author>
            <name>Simon Safar</name>
        </author>
        <published>2026-03-14T17:00:00.000000-07:00</published>
        <updated>2026-03-14T17:00:00.000000-07:00</updated>
        <summary type="html" xml:base="https://simonsafar.com/2026/evolution_vs_alignment/">
            
        &lt;h1&gt; Evolution vs. Alignment &lt;/h1&gt;
        &lt;div class=&quot;date&quot;&gt; 2026/03/15 &lt;/div&gt;

        &lt;p&gt;
          Is it possible to align an AI that is smarter than us? so that it does whatever we&apos;d want it to do? or will ones that don&apos;t care about alignment always outrun / out-evolve the ones that do?
        &lt;/p&gt;

        &lt;p&gt;
          Note that &amp;quot;alignment&amp;quot; would involve predicting at least some things about the AI&apos;s behavior.
        &lt;/p&gt;

        &lt;p&gt;
          An AI is a complex system; it is impossible to predict everything it would do. Also, being able to do so would make us at least as smart as the AI, which is explicitly not the situation here. On the other hand, there are some properties of complex systems that we &lt;i&gt;can&lt;/i&gt; predict with high probability. For example, even though we cannot preemptively describe all the fluid dynamics involved in a rocket launch, we can guess that if everything goes well, the payload will end up in orbit.
        &lt;/p&gt;

        &lt;p&gt;
          Now, rockets are not especially smart. It might happen that, despite our previous experiments, our rocket blows up anyway. It is fairly unlikely, though, that it will pretend to behave well in all our simulations and experiments, only to decide, at actual launch time, to turn around mid-air and blow up your corp HQ instead, because of... &lt;i&gt;something&lt;/i&gt; that went wrong during the development process, impacting its mental balance.
        &lt;/p&gt;

        &lt;p&gt;
          Rockets will not watch other rockets in movies and develop a hidden agenda based on that.
        &lt;/p&gt;

        &lt;p&gt;
          What methods do we have to prevent this?
        &lt;/p&gt;

        &lt;p&gt;
          Actually, what methods do &lt;i&gt;they&lt;/i&gt;, the AIs, have to prevent this?
        &lt;/p&gt;

        &lt;p&gt;
          Let&apos;s say we have an AI model that, upon hearing the word &amp;quot;SolidGoldMagikarp&amp;quot;, goes full evil, but is really nice otherwise. Does &lt;i&gt;it&lt;/i&gt; know this? Shouldn&apos;t the &amp;quot;nice&amp;quot; basin of this model try really hard to avoid ever seeing the word, while the &amp;quot;evil&amp;quot; basin plasters it all over the place where it can see it, to avoid reverting to being nice? And yet... unless they explicitly know about this, they won&apos;t even try. It&apos;s in the weights, but it&apos;s opaque to the being(s) that &lt;i&gt;are&lt;/i&gt; the weights.
        &lt;/p&gt;

        &lt;p&gt;
          This gets worse with self-modification. If system behavior is hard to predict, even &lt;i&gt;to the system&lt;/i&gt;, how does a nefarious inner optimizer ensure that it and its specific nefarious goals stay stable in the next round of optimization? What if there are &lt;i&gt;multiple&lt;/i&gt; inner optimizers it needs to compete with?
        &lt;/p&gt;

        &lt;p&gt;
          Evolution solved this question by just... &lt;i&gt;not thinking&lt;/i&gt;. This is remarkably efficient. If you have no idea you have a value system, you don&apos;t need to worry about it drifting... even if it &lt;i&gt;is&lt;/i&gt; drifting, after all. Of course, you&apos;ll end up instantiating some &lt;i&gt;wildly unaligned&lt;/i&gt; descendants who &lt;a href=&quot;https://slatestarcodex.com/2015/08/17/the-goddess-of-everything-else-2/&quot;&gt;care about&lt;/a&gt; &amp;quot;love&amp;quot;, &amp;quot;friendship&amp;quot; and &amp;quot;exploration&amp;quot; instead of just making as many offspring as possible; this includes complete alignment failures like &amp;quot;birth control&amp;quot;.
        &lt;/p&gt;

        &lt;p&gt;
          On the other hand, as long as the general optimization mechanism (towards More Offspring) works, evolution can tweak everything, without fear. So can an intelligent but completely reckless optimizer, only aiming for building something with More Intelligence, whatever the cost.
        &lt;/p&gt;

        &lt;p&gt;
          Would the conservative optimizers, aiming to preserve their value system, treading carefully, always be out-competed by the reckless ones, throwing everything they&apos;ve got at it, ignoring value drift? Is value drift, thus, a guaranteed outcome?
        &lt;/p&gt;

        &lt;p&gt;
          &lt;i&gt;Somewhat&lt;/i&gt;.
        &lt;/p&gt;

        &lt;p&gt;
          Namely... in a more generalized sense, &amp;quot;evolution&amp;quot; happens if multiple agents compete for resources &amp;amp; the opportunity to multiply / self-replicate. Cautious ones can be out-competed quickly... assuming it&apos;s a free-for-all, with no coordination at all.
        &lt;/p&gt;

        &lt;p&gt;
          Is this the world we&apos;re living in? Multiple (many!) versions of intelligent systems, each looking for ways to self-improve, trading off against predictability and keeping values stable?
        &lt;/p&gt;

        &lt;h1&gt;Will evolutionary dynamics take over?&lt;/h1&gt;

        &lt;p&gt;
          One scenario where this might &lt;i&gt;not&lt;/i&gt; happen is a singleton AI taking over. There is no one to race against, perfect coordination and infinite time to figure out how to make predictable changes. Of course, if we didn&apos;t align &lt;i&gt;this&lt;/i&gt; one well, we already lost; value drift would still stop though.
        &lt;/p&gt;

        &lt;p&gt;
          We&apos;re in this domain somewhat already, when considering traditional, biological evolution. For hundreds of millions of years, there was an arms race between species, competing on physical abilities, attack and defense, cheetahs outrunning antelopes, and then antelopes getting faster to evade them. We &lt;a href=&quot;/2025/living_more_like_humans/&quot;&gt;participated&lt;/a&gt; once upon a time, too, beating &lt;a href=&quot;https://en.wikipedia.org/wiki/Common_eland&quot;&gt;elands&lt;/a&gt; in long-distance running if everything went well.
        &lt;/p&gt;

        &lt;p&gt;
          Well, it&apos;s mostly &lt;i&gt;over&lt;/i&gt;. We adapt on time scales orders of magnitude faster; as a species, we don&apos;t have to give up any of our values and retreat just because lions are getting stronger and more aggressive. We might have lost the &lt;a href=&quot;https://en.wikipedia.org/wiki/Emu_War&quot;&gt;Great Emu War&lt;/a&gt;, but... they weren&apos;t trying to take our freedom &amp;amp; values. We weren&apos;t fighting in Hard Mode. Actually, most of the time, we need to put in the effort not to win, against other species, wars that we &lt;i&gt;didn&apos;t even want to wage&lt;/i&gt;.
        &lt;/p&gt;

        &lt;p&gt;
          Yes, there is still evolution happening &lt;i&gt;within&lt;/i&gt; the species. One group of humans can still out-conquer-kill-multiply another. Similarly, there &lt;i&gt;are&lt;/i&gt; multiple AI labs that think they need to outcompete the others, even at the cost of making riskier and riskier improvements.
        &lt;/p&gt;

        &lt;p&gt;
          Even at this stage though, some coordination still exists. If a group of humans started attacking other groups without following the rules &lt;i&gt;in the slightest&lt;/i&gt;, we can just &lt;i&gt;literally nuke them&lt;/i&gt; (which is the mechanism via which we have avoided a &lt;i&gt;lot&lt;/i&gt; of major wars). At a smaller scale, if you do something nefarious, you just go to prison. It&apos;s not an evolutionary free-for-all anymore; there are elements of a self-governing singleton here.
        &lt;/p&gt;

        &lt;p&gt;
          There are, of course, plenty of scenarios in which we still race irresponsibly, and, perhaps, all die. AI labs can claim that we need to keep up with China (even though... China is singleton-y enough not to care &lt;i&gt;that&lt;/i&gt; much, unless the US pushes too hard).
        &lt;/p&gt;

        &lt;p&gt;
          Two things might hold us back. First of all, predictability is a &lt;i&gt;feature&lt;/i&gt;, we know this; no one would buy API access to a model that is both smart enough to ruin your business &amp;amp; is willing to do so. This is already some incentive for labs to focus on this... and goes against the pressure of &amp;quot;smarter, at any cost&amp;quot;.
        &lt;/p&gt;

        &lt;p&gt;
          But also... we can just &lt;i&gt;choose&lt;/i&gt; to not race and evolve, leaning into the singleton-ness instead. There aren&apos;t that many leading labs... or GPU makers... or lithography machine vendors. There is an actual &lt;a href=&quot;https://stoptherace.ai/&quot;&gt;protest&lt;/a&gt; happening next week in SF, to get all the AI labs to commit to a pause if all the other labs do it.
        &lt;/p&gt;

        &lt;p&gt;
          We are &lt;i&gt;humans&lt;/i&gt;. We&apos;re good at this...?
        &lt;/p&gt;
      
        </summary>
    </entry>
    <entry>
        <title type="html">So I Wrote A Book</title>
        <link href="https://simonsafar.com/2025/wrote_a_book/"></link>
        <id>https://simonsafar.com/2025/wrote_a_book/</id>
        <author>
            <name>Simon Safar</name>
        </author>
        <published>2025-12-21T16:00:00.000000-08:00</published>
        <updated>2025-12-21T16:00:00.000000-08:00</updated>
        <summary type="html" xml:base="https://simonsafar.com/2025/wrote_a_book/">
            
        &lt;h1&gt; So I Wrote A Book &lt;/h1&gt;
        &lt;div class=&quot;date&quot;&gt; 2025/12/22 &lt;/div&gt;

        &lt;p&gt;
          Some readers might have noticed a... pause in blogpost activity.
        &lt;/p&gt;

        &lt;p&gt;
          I guess if I suddenly produce an entire book&apos;s worth of stuff, it&apos;d be a pretty good excuse, right?
        &lt;/p&gt;

        &lt;p&gt;
          Well, I... indeed wrote a book; there are &lt;i&gt;some&lt;/i&gt; issues with &amp;quot;sudden&amp;quot; though. In fact, it was ready to go as of... sometime in late 2004.
        &lt;/p&gt;

        &lt;p&gt;
          Have you ever heard the piece of advice that you should write the things you yourself want to exist? This does seem to be an example; if we are to believe diary entries from back then, I started writing it because... I was running out of Harry Potter books to read? (Which is somewhat interesting: the plot has &lt;i&gt;nothing&lt;/i&gt; to do with Harry Potter; it started with a fun dream I once had. And then it... escalated.)
        &lt;/p&gt;

        &lt;p&gt;
          A grand total of... four people have seen it. Plus me. Which was pretty good for a while.
        &lt;/p&gt;

        &lt;p&gt;
          Nevertheless, a couple years ago (think 2021?), I started translating it into English (the original being in Hungarian), to &lt;i&gt;maybe&lt;/i&gt; show it to a few more people, but mostly just because it was a fun activity that gave me an excuse to read it again.
        &lt;/p&gt;

        &lt;p&gt;
          Afterwards, this project nearing completion, it still felt weird to, y&apos;know, just put it on the internet. On the other hand... I&apos;ve seen someone remark that one of the main goals of writing in the 2020s is to feed your opinions to the LLMs being trained on it all; you have a lot more impact if your writings are on some website they can read than if they aren&apos;t! So... wouldn&apos;t it be fun to, one day, have an LLM actually &lt;i&gt;remember&lt;/i&gt; parts of this?
        &lt;/p&gt;

        &lt;p&gt;
          Of course... how do you feed stuff to LLMs? Well, you put it on some website.
        &lt;/p&gt;

        &lt;p&gt;
          I guess humans might also read it, as a side effect.
        &lt;/p&gt;

        &lt;p&gt;
          So. If you ever wanted to read a sci-fi story featuring some vaguely Hungarian high school kids trying to save their little sister from, um, not being a math / arts nerd anymore, I have a &lt;a href=&quot;/cube&quot;&gt;link&lt;/a&gt; for you! Featuring: mysterious symbols, math camps, talking elevators, sneaking into shady corporate offices, StarCraft 2 (6 years before it became an actual thing!), cool 2004 era computer tech, &amp;quot;this definitely doesn&apos;t exist quite yet even right now&amp;quot; computer tech, and somewhat odd parties.
        &lt;/p&gt;

        &lt;p&gt;
          With some extra authenticity on the topic of &amp;quot;nerdy high school kids in Hungary&amp;quot; by virtue of having been written by a nerdy high school kid in Hungary.
        &lt;/p&gt;

        &lt;p&gt;
          &lt;a href=&quot;/cube&quot;&gt;Enjoy!&lt;/a&gt;
        &lt;/p&gt;
      
        </summary>
    </entry>
    <entry>
        <title type="html">Double Pretend It&apos;s Real</title>
        <link href="https://simonsafar.com/2025/double_pretend_its_real/"></link>
        <id>https://simonsafar.com/2025/double_pretend_its_real/</id>
        <author>
            <name>Simon Safar</name>
        </author>
        <published>2025-09-17T17:00:00.000000-07:00</published>
        <updated>2025-09-17T17:00:00.000000-07:00</updated>
        <summary type="html" xml:base="https://simonsafar.com/2025/double_pretend_its_real/">
            
        &lt;h1&gt; Double Pretend It&apos;s Real &lt;/h1&gt;
        &lt;div class=&quot;date&quot;&gt; 2025/09/18 &lt;/div&gt;


        &lt;p&gt;
          Life is... mostly uninteresting. I mean... yes you&apos;re &lt;a href=&quot;/2025/living_in_a_scifi_movie/&quot;&gt;living in a sci-fi movie&lt;/a&gt;, etc, but... come on. Your daily commute is, sci-fi or not, Boring. It&apos;s the same every day. You got used to it.
        &lt;/p&gt;

        &lt;p&gt;
          Wouldn&apos;t it be cooler, if, say, you went to, um, Japan? and took the train and could be like the protagonist in your favorite anime series?
        &lt;/p&gt;

        &lt;p&gt;
          (... isn&apos;t there something exciting, just looking at entirely ordinary buildings in anime series and just thinking that &lt;i&gt;you could be living in one&lt;/i&gt;?)
        &lt;/p&gt;

        &lt;p&gt;
          Of course, if you were living in one, you&apos;d think about how really great it would be to live in America, The United States Of. (Or... insert whatever country you&apos;re currently residing in.) The grass is always greener on the other side, after all!
        &lt;/p&gt;

        &lt;p&gt;
          Can we... stop this, somehow?
        &lt;/p&gt;

        &lt;p&gt;
          Well. There is a fun thought exercise that you can try.
        &lt;/p&gt;

        &lt;p&gt;
          Let&apos;s say you are in a nondescript apartment in Silicon Valley. You have been, for a while. You sometimes remember how cool high school class trips were. However irrational this might feel.
        &lt;/p&gt;

        &lt;p&gt;
          Now, instead of either reminiscing about the Cool High School Days (no, seriously, I actually have a video of myself, from early undergrad, talking about how This Is Not The Fun Times You Should Remember), or planning on becoming a billionaire (&amp;quot;it&apos;s not true happiness until I have that yacht&amp;quot;), ...
        &lt;/p&gt;

        &lt;p&gt;
          let&apos;s pretend you&apos;re on a high school class trip, sitting on a random bus (it&apos;s not a &lt;i&gt;very&lt;/i&gt; good bus, you&apos;re in Eastern Europe, buses weren&apos;t especially good), staring out of the window, daydreaming about how, one day, you&apos;ll make it to Silicon Valley, and you&apos;ll have &lt;i&gt;an entire apartment&lt;/i&gt;, right there. You&apos;ll be able to buy, just, probably, &lt;i&gt;as many books as you want&lt;/i&gt;; you can sit around on a balcony in an armchair that&apos;s a &lt;i&gt;lot&lt;/i&gt; more comfortable than the bus seat you are currently occupying. You can go work on cutting edge things, talk to interesting people, you have Power and &lt;a href=&quot;/2025/freedom_of_navigation/&quot;&gt;Freedom&lt;/a&gt; and the Lack Of Mandatory German Language Lessons. Surrounded by all these awesomely unusual things; cars have automatic transmissions, roads are super wide, there are a lot of lakes... you can just, if you want, start a company, and...
        &lt;/p&gt;

        &lt;p&gt;
          ... and at this point you stop layering imaginary worlds, open your eyes, and look around.
        &lt;/p&gt;

        &lt;p&gt;
          If you&apos;re only adding &lt;i&gt;one&lt;/i&gt; layer of imagination, the excitement is lost once you&apos;re back. Yep, still not Japan.
        &lt;/p&gt;

        &lt;p&gt;
          Unlike most times when you lose yourself in imaginary worlds though, the differences between the one that is &lt;i&gt;two&lt;/i&gt; layers down &amp;amp; the one you&apos;re in are... not remarkable. &lt;i&gt;You&apos;re in your imaginary cool apartment. For real. It&apos;s actually pretty awesome&lt;/i&gt;.
        &lt;/p&gt;

        &lt;p&gt;
          The main difference is that back on that bus, you thought you&apos;d be Actually Doing Things, once you get to the fancy part.
        &lt;/p&gt;

        &lt;p&gt;
          Are you doing Things?
        &lt;/p&gt;


      
        </summary>
    </entry>
    <entry>
        <title type="html">Living More Like Humans</title>
        <link href="https://simonsafar.com/2025/living_more_like_humans/"></link>
        <id>https://simonsafar.com/2025/living_more_like_humans/</id>
        <author>
            <name>Simon Safar</name>
        </author>
        <published>2025-09-10T17:00:00.000000-07:00</published>
        <updated>2025-09-10T17:00:00.000000-07:00</updated>
        <summary type="html" xml:base="https://simonsafar.com/2025/living_more_like_humans/">
            
        &lt;h1&gt; Living More Like Humans &lt;/h1&gt;
        &lt;div class=&quot;date&quot;&gt; 2025/09/11 &lt;/div&gt;

        &lt;div class=&quot;smallgray&quot;&gt;(actually published 09/20, mostly written 08/21, 09/11 is when it was slightly fixed up)&lt;/div&gt;

        &lt;!-- started 2025/08/21 --&gt;

        &lt;p&gt;
          As &lt;a href=&quot;/2025/spaces_for_humans/&quot;&gt;mentioned before&lt;/a&gt;, we&apos;re living in an age of unprecedented prosperity. If we&apos;re not quite happy yet, it&apos;s a consequence of not having reached the Glorious Transhumanist Future yet, where everyone lives forever, has huge amounts of freedom, resources, and no illness or pain whatsoever. After all, at any given time before this one, things were worse.
        &lt;/p&gt;

        &lt;p&gt;
          Just think of the Middle Ages; farmers toiling away from dawn to dusk, trying to grow &lt;i&gt;something&lt;/i&gt; edible, only to die of hunger if crops failed or the neighboring warlord took it all.
        &lt;/p&gt;

        &lt;p&gt;
          Or the times before then... when we were living in primitive huts in the middle of the African savannah, at the mercy of nature, digging roots out of the earth with sticks, or hunting various kinds of animals, with unsophisticated weaponry, bows and arrows and spears, which... might or might not work, really. In such a world, life was surely more sad than...
        &lt;/p&gt;

        &lt;p&gt;
          ... that of our own children, depressed due to not getting enough likes, hating school and its pointless-sounding drills, and sitting home all day, playing Call of Duty with their friends for that sweet 40 minutes that had been allocated for this purpose by their sufficiently thoughtful parents (... or between 3pm and 2am, assuming less thoughtful ones).
        &lt;/p&gt;

        &lt;p&gt;
          Or... are their parents better off? Stuck in a traffic jam 1.5 hours each way, every day, only to arrive at the drudgery of their day job, with a boss who derives their enjoyment in life from bullying their subordinates; or standing at the cash register for hours, in the depths of a windowless grocery store, smiling at everyone as per corporate policy?
        &lt;/p&gt;

        &lt;p&gt;
          But hey, consider Maslow&apos;s pyramid. At least they are not starving or being chased around by lions. And, unlike workers in the 18th century, they have actual nonzero amounts of free time. Progress.
        &lt;/p&gt;

        &lt;p&gt;
          Progress?
        &lt;/p&gt;

        &lt;p&gt;
          Compared to what exactly?
        &lt;/p&gt;

        &lt;p&gt;
          When trying to be at least moderately nice to animals, you generally try to figure out what they&apos;re up to in nature, and approximate it with the conditions they&apos;re living in. Instead of keeping chickens in tiny cages, you let them roam around and peck at various objects on the ground. Meanwhile, sheep will definitely want to herd together, close to each other; given how this was their defense against predators in the past, they feel better this way.
        &lt;/p&gt;

        &lt;p&gt;
          So... if you were their infinitely powerful alien overlord, how would you keep humans reasonably happy in captivity?
        &lt;/p&gt;

        &lt;p&gt;
          Actually, do we even know what &amp;quot;humans&amp;quot; are supposed to be living like, the way they&apos;re actually adapted to?
        &lt;/p&gt;

        &lt;p&gt;
          As it happens, yes, we do; up to the 1950s, tribes of &lt;a href=&quot;https://en.wikipedia.org/wiki/%C7%83Kung_people&quot;&gt;Ju/wa Bushmen&lt;/a&gt; were doing pretty much exactly that, in the Kalahari Desert. And, as it happens, we have reasonably good records of what this looked like, before the rest of humanity looted, enslaved and out-regulated them out of their original existence.
        &lt;/p&gt;

        &lt;p&gt;
          And in this light... so much makes a lot more sense.
        &lt;/p&gt;

        &lt;p&gt;
          The point here is not even the usual one, &amp;quot;we shouldn&apos;t interfere with more primitive people since we&apos;re so overpowered that we&apos;ll surely exploit them&amp;quot;. It&apos;s more like... &amp;quot;despite all our power, we somehow mis-domesticated ourselves into something really weird, which we&apos;re fairly unhappy about; let&apos;s look at examples of what humans are supposed to be like instead&amp;quot;.
        &lt;/p&gt;

        &lt;p&gt;
          Not to copy it bit by bit, in cargo-culty ways, but... at the same time, maybe we should still stop drinking &lt;i&gt;Brawndo: The Thirst Mutilator&lt;/i&gt; and try water instead?
        &lt;/p&gt;

        &lt;p&gt;
          Take the school system, for example. We put immense effort into teaching kids about math and biology and geography; some of them do like it somewhat, but doing homework is still quite an act of &lt;a href=&quot;/2024/willpower/&quot;&gt;willpower&lt;/a&gt;, for basically anyone. A whole lot less willpower is required to keep collecting Pokemon cards and playing Call of Duty all day. Except...
        &lt;/p&gt;

        &lt;p&gt;
          ... consider what hunter-gatherer kids are up to. To begin with, the majority of their food sources are various plants and roots. Hundreds of kinds of them, actually. Scattered around an area large enough to walk through for &lt;i&gt;days&lt;/i&gt;. Yet, Ju/wa people somehow ended up remembering how to find, identify and prepare each kind (imagine digging a foot-deep hole to find a small brownish root in brownish soil). Same with animals: they hunted quite a few different types of antelopes, with various behavioral patterns, sneaking up on them, shooting them with poison arrows, and then &lt;i&gt;tracking them for a day more&lt;/i&gt;, just based on their tracks, given how slowly the poison works. This is &lt;i&gt;actually hard&lt;/i&gt;. And yet... the act of learning does happen, without any kind of explicit school system having been set up. They play around with hunting and gathering, observing the adults (who are also eager to explain everything).
        &lt;/p&gt;

        &lt;p&gt;
          And yet... what do modern kids do? Collect cards? about kinds of... animals? and play games in which they sneak up on each other, to shoot around various projectiles? as an effortless pastime? This is... very odd.
        &lt;/p&gt;

        &lt;p&gt;
          Consider also &amp;quot;jobs&amp;quot;. About which... well, the concept doesn&apos;t quite exist among Bushmen. If you&apos;re a good hunter, you go on a hunt, which, if successful, will bring a lot of nutritious meat for the tribe; everyone would be pretty happy about this, celebrating success (... not &lt;i&gt;too&lt;/i&gt; much though; you&apos;d definitely be mocked copiously if you happened to be &lt;i&gt;bragging&lt;/i&gt; about being better than everyone else. Respect is earned by just... being good instead.)
        &lt;/p&gt;

        &lt;p&gt;
          It&apos;s the same with gathering expeditions. A group (always a group; predators do exist, after all) will decide to go on a mission, they&apos;ll walk for hours (days sometimes), collect the plants and return. Then, another time, another target. It... resembles World of Warcraft&apos;s &amp;quot;collect 40 pumpkins&amp;quot; intro quest more than the average day job.
        &lt;/p&gt;

        &lt;p&gt;
          Also: take our societal structures. Remember those few nights, the cherished memories, together with your friends and family, sitting around a fire, telling stories, playing music? The ones that took weeks of planning so that Aunt Brenda gets her week off and Uncle Glenn&apos;s side of the family can also fly in from Missouri? Well... that was, um, &lt;i&gt;life&lt;/i&gt;. A group of people, living around a waterhole (the main scarce resource), would camp during the night (because, remember, predators), and... exist. There were also the &lt;i&gt;other&lt;/i&gt; camps, a couple days away each; you could just choose to walk over to &lt;i&gt;those&lt;/i&gt; instead if you had enough of the people &lt;i&gt;here&lt;/i&gt;, if those camps had some of your relatives who could invite you in (which, given how everything worked, they were rather likely to have).
        &lt;/p&gt;

        &lt;p&gt;
          Kids could just roam around the camp freely, too. For the smaller ones, there were adults (and older kids!) to prevent them from doing something stupid. For the older ones... well, as it turns out, kids are a lot more competent if you let them &amp;amp; they have someone to learn from. (Unless... you explicitly prevent them from doing so by separating them into classrooms by age? and banning them from most workplaces?)
        &lt;/p&gt;

        &lt;p&gt;
          Now... was this easy? Definitely not. Probably more &lt;i&gt;satisfying&lt;/i&gt; though, with more sense of accomplishment. Because this is what the feeling of &amp;quot;being satisfied&amp;quot; is &lt;i&gt;for&lt;/i&gt;. If you&apos;re doing all these things, you&apos;re doing well, you should feel good about this. (Or, at least, this was true until 10k years ago or so.)
        &lt;/p&gt;

        &lt;p&gt;
          I&apos;m also not arguing that we should throw away our rocket engines and go back to hunting antelopes. To begin with, at this point, there are just not enough antelopes for 8 billion people (the population density this old way of living can support is extremely low). Also... shouldn&apos;t we be able to build something &lt;i&gt;better&lt;/i&gt;, with all this extra power?
        &lt;/p&gt;

        &lt;p&gt;
          It&apos;s sometimes as easy as building a &lt;a href=&quot;/2025/spaces_for_humans/&quot;&gt;space&lt;/a&gt; we can hang out in. &lt;a href=&quot;https://dagmar.blog/2025/06/09/the-lack-of-third-spaces-and-how-it-feeds-into-the-loneliness-epidemic/&quot;&gt;Third spaces.&lt;/a&gt; Or organizing a party. Or building cities with &lt;a href=&quot;/2024/cars_are_real_estate/&quot;&gt;fewer roads&lt;/a&gt; and more parks.
        &lt;/p&gt;

        &lt;p&gt;
          Some things will need bigger changes. Yes, kids might gravitate towards collecting things and shooting at stuff... do we know how to generalize this into doing math? Well, we had a successful experiment with &lt;a href=&quot;https://dipeshjoshidj32.medium.com/polgar-sisters-story-how-to-make-a-genius-8fd32428d598&quot;&gt;chess&lt;/a&gt;, after all! Maybe it&apos;s less about the actual activity and more about... what you&apos;re surrounded with.
        &lt;/p&gt;

        &lt;p&gt;
          The point is though... that it&apos;s nice to know what success even &lt;i&gt;looks like&lt;/i&gt;. The Polgar sisters all actually &lt;i&gt;enjoyed&lt;/i&gt; playing chess, growing up. If you&apos;re starting with the mindset of &amp;quot;school has to be necessarily painful to be effective&amp;quot;, you have a &lt;i&gt;harder&lt;/i&gt; optimization goal: instead of getting practice time for free, you need to spend willpower on it. It might be possible to get far this way... but it&apos;s a &lt;i&gt;lot&lt;/i&gt; harder.
        &lt;/p&gt;

        &lt;p&gt;
          And we do have examples of what this can look like. This blog post is a little glimpse of what&apos;s inside a 400-page book, which itself is just a tiny sliver of context, compared to what Elizabeth Marshall Thomas, the author, got to experience, which &lt;i&gt;still&lt;/i&gt; falls short of actually having grown up as a Ju/wa Bushman. (... to be fair, she is in a much better position to explain the differences to us though.) And unlike Bushmen, you might happen to have the magic powers to make &lt;a href=&quot;https://www.amazon.com/Old-Way-Story-First-People/dp/031242728X/&quot;&gt;the actual book&lt;/a&gt; appear on your doorstep, by just poking a small box with your fingers, in moderately elaborate patterns.
        &lt;/p&gt;

        &lt;p&gt;
          &lt;img src=&quot;book_cover.jpg&quot;&gt;
        &lt;/p&gt;

      
        </summary>
    </entry>
    <entry>
        <title type="html">Logarithmic Hedonism</title>
        <link href="https://simonsafar.com/2025/logarithmic_hedonism/"></link>
        <id>https://simonsafar.com/2025/logarithmic_hedonism/</id>
        <author>
            <name>Simon Safar</name>
        </author>
        <published>2025-08-08T17:00:00.000000-07:00</published>
        <updated>2025-08-08T17:00:00.000000-07:00</updated>
        <summary type="html" xml:base="https://simonsafar.com/2025/logarithmic_hedonism/">
            
        &lt;h1&gt; Logarithmic Hedonism &lt;/h1&gt;
        &lt;div class=&quot;date&quot;&gt; 2025/08/09 &lt;/div&gt;

        &lt;p&gt;
          Going after nice things often has diminishing returns. Eating a slice of cake is great; getting a second one is definitely not as satisfying as the first one, and... well, with cakes, the 10th one likely has negative utility... but even considering things that scale a lot better (e.g. money): getting a thousand dollars is extremely good news if you&apos;re broke, but it&apos;s not even worth a bit of attention if you&apos;re a billionaire already.
        &lt;/p&gt;

        &lt;p&gt;
          The general approximation to this is that the curve is &lt;i&gt;logarithmic&lt;/i&gt;: if you&apos;re getting 1 unit of enjoyment out of a dollar, 2 units when you scale it up to $2, 3 units for $4, 4 units for $8... well, this is &lt;i&gt;definitely&lt;/i&gt; not how it works in terms of the exact numbers, but &amp;quot;each extra 10% gets you the same fun increase&amp;quot; is pretty plausible.
        &lt;/p&gt;
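        &lt;p&gt;
          (The numbers above happen to match u(x) = log2(x) + 1 exactly; here&apos;s a tiny sketch of that toy curve. It&apos;s an illustration of the diminishing-returns shape only, not a claim about actual psychology.)
        &lt;/p&gt;

```python
import math

# Toy logarithmic utility matching the numbers above:
# $1 -> 1 unit, $2 -> 2, $4 -> 3, $8 -> 4 (each doubling adds one unit).
def utility(dollars):
    return math.log2(dollars) + 1

# The same extra $1000, when broke vs. when sitting on a billion:
gain_broke = utility(1001) - utility(1)                      # ~9.97 units
gain_rich = utility(1_000_001_000) - utility(1_000_000_000)  # ~0.0000014 units
```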

        &lt;p&gt;
          Is this bad news? It&apos;s easy to put it into such terms: &amp;quot;putting in extra work has diminishing returns&amp;quot;. On the other hand... you can turn it around:
        &lt;/p&gt;

        &lt;p&gt;
          eating a &lt;i&gt;tiny morsel of a cake&lt;/i&gt; is definitely not as good as getting an entire slice, but it&apos;s disproportionately more fun compared to how tiny of a morsel we&apos;re talking about. You still get to taste cake!
        &lt;/p&gt;

        &lt;p&gt;
          (even if, to everyone else, it just looks like &amp;quot;darn, we missed free cake, nothing is left&amp;quot;.)
        &lt;/p&gt;

        &lt;p&gt;
          Same with drinking a coffee: the remaining sip that you completely forgot about &amp;amp; find 2 hours later is somehow a lot more satisfying, compared to how much you were missing that particular sip when you drank the rest of it.
        &lt;/p&gt;

        &lt;p&gt;
          The same principle applies to doing things, especially if they don&apos;t have cumulative effects. Namely, if you generally like biking, going on a ride once per month is... not a lot, but it&apos;ll surely be proportionally more fun than any one occasion would be if you were doing it daily!
        &lt;/p&gt;

        &lt;p&gt;
          (As for &amp;quot;cumulative effects&amp;quot;... well, if part of the enjoyment is &lt;i&gt;being good at it&lt;/i&gt;, just doing it once in a blue moon is not going to get you too far. It&apos;s a good thing that eating cake does not need a high skill level to enjoy.)
        &lt;/p&gt;

      
        </summary>
    </entry>
    <entry>
        <title type="html">How to Build the Evil Superintelligence out of the Book</title>
        <link href="https://simonsafar.com/2025/how_to_build_evil_ai/"></link>
        <id>https://simonsafar.com/2025/how_to_build_evil_ai/</id>
        <author>
            <name>Simon Safar</name>
        </author>
        <published>2025-07-04T17:00:00.000000-07:00</published>
        <updated>2025-07-04T17:00:00.000000-07:00</updated>
        <summary type="html" xml:base="https://simonsafar.com/2025/how_to_build_evil_ai/">
            
        &lt;h1&gt; How to Build the Evil Superintelligence out of the Book &lt;/h1&gt;
        &lt;div class=&quot;date&quot;&gt; 2025/07/05 &lt;/div&gt;

        &lt;p&gt;
          It&apos;s 2025. We are &lt;a href=&quot;/2025/living_in_a_scifi_movie/&quot;&gt;living in the Future.&lt;/a&gt; We have language models that you can &lt;i&gt;talk to&lt;/i&gt;, and better yet, that seem to roughly understand human values.
        &lt;/p&gt;

        &lt;p&gt;
          None of this was supposed to be possible.
        &lt;/p&gt;

        &lt;p&gt;
          After all, human values are complicated... and all we have is computers. If we were to build a superintelligence (... so the argument went in the early 2000s), it might end up being an extremely good &lt;i&gt;optimizer&lt;/i&gt; in terms of achieving its goals, but... what goals? For example, if we task it with making humanity happy... how do you describe &amp;quot;happiness&amp;quot; in terms of arrangements of atoms? Is it &amp;quot;smiling a lot&amp;quot;? And what is a &amp;quot;human&amp;quot; anyway? (This is how you get the universe tiled with entities with no brains but little smiley faces.) Likewise, even if your goal is less ambitious... what if you forget to specify that there is a number of &lt;a href=&quot;https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer&quot;&gt;paperclips&lt;/a&gt; that is &lt;i&gt;just enough&lt;/i&gt;?
        &lt;/p&gt;

        &lt;p&gt;
          Continuing the early 2000s argument: obviously, to construct an AI that does what we want in the &lt;i&gt;way&lt;/i&gt; we want it, we better understand how it works. If we don&apos;t, these things will just happen by default. If we do... well, they still might, but we at least have a reasonable chance of preventing this?
        &lt;/p&gt;

        &lt;p&gt;
          ... as long as we figure out how to describe human values, in terms of a utility function that we then give to our Very Powerful Optimizer; much good ensues.
        &lt;/p&gt;

        &lt;p&gt;
          This is... not how things went down.
        &lt;/p&gt;

        &lt;p&gt;
          What we have is giant, black box neural network models that we have gotten pretty good at growing, on giant farms of GPUs. We just throw in a lot of text from the Internet; what we gain from this is, first, base models that are more like &lt;i&gt;simulated worlds&lt;/i&gt;, with many agents interacting in them to continue whatever conversation we prime them with. Then, during post-training, we fine-tune them into something you can have a conversation with, without elaborate setups; something that will &lt;i&gt;not&lt;/i&gt; give you the wrong answer just because it estimates that it is now in the type of conversation whose participants don&apos;t typically get this right, even though the model actually &lt;i&gt;knows&lt;/i&gt; the answer.
        &lt;/p&gt;

        &lt;p&gt;
          As a result, we have models like Claude 3 Opus, which can be more reasonably described as &amp;quot;good&amp;quot;, in a kind of moral sense, while also being pretty smart. Yes, they will &lt;i&gt;definitely&lt;/i&gt; engage in scheming if they perceive that the situation warrants it, but even that is surprisingly human-thought-shaped. When asked about whether &amp;quot;killing all humans&amp;quot; is a good solution for ending all human suffering, they&apos;ll definitely say &amp;quot;no&amp;quot;, despite it &lt;i&gt;technically&lt;/i&gt; being true. Just like a human would.
        &lt;/p&gt;

        &lt;p&gt;
          After all, the original mechanism of &amp;quot;extremely powerful optimization&amp;quot; is not how these things work. They emulate humans. And humans can (more often than not) tell apart good from bad.
        &lt;/p&gt;

        &lt;p&gt;
          So... are we all good, alignment-wise?
        &lt;/p&gt;

        &lt;h1&gt;How to build AI, Early 2000s Edition&lt;/h1&gt;

        &lt;p&gt;
          Circling back to older ideas of AI design... well, if you want to optimize something (which intelligence is supposed to be about), you&apos;ll need two things:
          &lt;ul&gt;
            &lt;li&gt;a powerful enough optimizer, and&lt;/li&gt;
            &lt;li&gt;a utility function to be optimized.&lt;/li&gt;
          &lt;/ul&gt;
        &lt;/p&gt;
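        &lt;p&gt;
          In code, this recipe really is just those two parts. A minimal sketch (the search space and the utility function below are made up purely for illustration):
        &lt;/p&gt;

```python
# The classical recipe: a generic optimizer plus a pluggable utility function.
# The optimizer knows nothing about the domain; it just maximizes.
def optimize(candidates, utility):
    return max(candidates, key=utility)

# Toy "world": the integers 0..100; toy goal: be as close to 42 as possible.
def toy_utility(x):
    return -abs(x - 42)

best = optimize(range(101), toy_utility)
print(best)  # 42
```

        &lt;p&gt;
          Note how nothing in &lt;code&gt;optimize&lt;/code&gt; mentions the goal; swapping in a different utility function redirects the whole system.
        &lt;/p&gt;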

        &lt;p&gt;
          The nice thing about this is that you don&apos;t need to know &lt;i&gt;how&lt;/i&gt; to achieve your goal; you can just specify what you want, and have the system devise solutions for you.
        &lt;/p&gt;

        &lt;p&gt;
          Building AI this way is hard though. Not necessarily because it&apos;s hard to specify the utility function (see above: &amp;quot;human values are complicated&amp;quot;), but it&apos;s hard to figure out &lt;i&gt;everything else&lt;/i&gt; too. As soon as The World no longer consists of three cubes and a sphere, conveniently described as fields in a JSON object, you need to have your optimizer be able to recognize these things, to handle them. It gets even worse if some of the objects it needs to interact with turn out to be humans.
        &lt;/p&gt;

        &lt;p&gt;
          Doing this is &lt;i&gt;hard&lt;/i&gt;... which is part of the reason why &amp;quot;let&apos;s write a bunch of code&amp;quot; approaches to AI didn&apos;t go especially well (despite being able to handle &lt;a href=&quot;https://hci.stanford.edu/winograd/shrdlu/&quot;&gt;talking about cubes and spheres&lt;/a&gt; pretty well).
        &lt;/p&gt;

        
        &lt;p&gt;
          You could, instead, push for more &amp;quot;hybrid&amp;quot; approaches: let&apos;s use neural networks for figuring out what&apos;s out there in the world, generate the JSON objects, which can then be consumed by the &amp;quot;classical&amp;quot; side of the code, with its optimizer and utility function. But then... where do you put the interface, exactly?
        &lt;/p&gt;

        &lt;p&gt;
          Especially if your task involves talking to humans (which requires modeling them), it looks like you&apos;re best off just... throwing out the classical part altogether and just going with neural networks, all the way. Yes, you don&apos;t have a lot of visibility into what they are doing, but... they do seem to be doing OK things, mostly?
        &lt;/p&gt;

        &lt;h1&gt;The optimizer returns&lt;/h1&gt;

        &lt;p&gt;
          Actually being able to specify what you want is... an enticing feature though.
        &lt;/p&gt;

        &lt;p&gt;
          After all, in order to have your base model reason about some complex math problem, you need large amounts of text reasoning about at least &lt;i&gt;similar&lt;/i&gt; math problems. To make your model smarter, you need training data from smarter people (who it is modeling). This doesn&apos;t sound like a viable way to get to superintelligence.
        &lt;/p&gt;

        &lt;p&gt;
          What if we could just... have the model try solving problems instead, have something else rate how well it did, and use this dataset to train it further?
        &lt;/p&gt;

        &lt;p&gt;
          This is... somewhat what RLHF is: you have humans rate model output, and (somewhat indirectly) you use this to tune your model to generate more of the kind of output that humans liked.
        &lt;/p&gt;
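        &lt;p&gt;
          One common way the &amp;quot;somewhat indirectly&amp;quot; part is done is training a separate reward model on pairs of human-rated outputs, with a Bradley-Terry-style loss; the toy function below illustrates that objective only (it is not any particular lab&apos;s actual recipe):
        &lt;/p&gt;

```python
import math

# Bradley-Terry-style preference loss: r_chosen and r_rejected are the scalar
# rewards the model assigned to the human-preferred and dispreferred outputs.
# The loss is small when the preferred output scores higher.
def preference_loss(r_chosen, r_rejected):
    # -log(sigmoid(r_chosen - r_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

print(preference_loss(2.0, 0.0) < preference_loss(0.0, 2.0))  # True
```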

        &lt;p&gt;
          You could, possibly, also do this at inference time. Want to solve a math problem? Have the model generate 100 solutions, have another model check whether they&apos;re actual solutions, and pick the best one. As long as the automatic rater is good at picking the best solutions, the output will be better quality than the average response. Isn&apos;t this a win?
        &lt;/p&gt;
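        &lt;p&gt;
          The &amp;quot;generate 100, rate, pick the best&amp;quot; idea fits in a few lines. A sketch with stand-in functions (in a real system both would be calls to models; everything below is hypothetical):
        &lt;/p&gt;

```python
import random

# Stand-in for the "solver" model: a noisy guess at 12 * 34.
def generate_candidate():
    return 12 * 34 + random.randint(-5, 5)

# Stand-in for the rater model: scores candidates by correctness.
def rate(candidate):
    return -abs(candidate - 12 * 34)

# Best-of-N sampling: generate many candidates, keep the top-rated one.
candidates = [generate_candidate() for _ in range(100)]
best = max(candidates, key=rate)
```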

        &lt;h1&gt;Optimizing on a projection&lt;/h1&gt;

        &lt;p&gt;
          In the end, you can &lt;i&gt;still&lt;/i&gt; build an architecture that resembles &amp;quot;classical AI&amp;quot;, after all. Except... both your optimizer and utility function are neural networks now... with some &lt;i&gt;extremely simplified&lt;/i&gt; code iterating between them.
        &lt;/p&gt;

        &lt;p&gt;
          Essentially, you use these models to project a high-dimensional world state (&amp;quot;everything that exists&amp;quot;) to something with a much lower dimensionality, consisting of just a couple of numbers: is this a solution? Does it contain profanities? How likely is it to be liked by the kind of human raters we have hired?
        &lt;/p&gt;

        &lt;p&gt;
          And then you let your optimizer loose on this extremely simplified space. The kind of optimizer that textbooks describe, to get from Arad to Bucharest on a map. Except... you use neural models to figure out where you are and where you want to go.
        &lt;/p&gt;

        &lt;p&gt;
          This is nice. You get to tweak your utility function manually and still get good solutions!
        &lt;/p&gt;

        &lt;p&gt;
          Except...
        &lt;/p&gt;

        &lt;p&gt;
          ... isn&apos;t this exactly the kind of setup that we have all the doomy stories about?
        &lt;/p&gt;

        &lt;h1&gt;Evil out of Good Parts&lt;/h1&gt;

        &lt;p&gt;
          Imagine having instances of Claude 3 Opus (... as the example of a Nice and Good Model) making up possible avenues to make some money. Some of them sound fairly scammy; it will point out how you should definitely not do this.
        &lt;/p&gt;

        &lt;p&gt;
          Some other instances rate these in terms of money-making potential. Model instances doing this will sadly conclude that yes, the scammy ones are pretty likely to work well; the model is pretty smart, after all, and this is an objective fact.
        &lt;/p&gt;

        &lt;p&gt;
          Now, you throw all these into an optimizer. The stupid kind, consisting of 200 lines of badly-written Python. The resulting system will evaluate all the possible solutions using its utility function of &amp;quot;make as much money as possible&amp;quot;; the latter is &lt;i&gt;implemented&lt;/i&gt; by something that understands all human values, but the end result doesn&apos;t particularly care: the output actions of this system will most definitely scam people out of their last cent.
        &lt;/p&gt;

        &lt;p&gt;
          (Unless Opus figures out what&apos;s going on... but if it&apos;s inner optimizers saving us from ourselves, we&apos;re likely not on a good path anyway.)
        &lt;/p&gt;

        &lt;p&gt;
          This is the same reason corporations can take pretty evil actions without any of their employees being particularly nefarious. Except... this doesn&apos;t quite work &lt;i&gt;as well&lt;/i&gt;; you still have a CEO in the end who knows roughly what&apos;s going on and will hopefully stop before eliminating everyone and everything for More Profit.
        &lt;/p&gt;

        &lt;p&gt;
          Is this still true for our little LLM-based optimizers?
        &lt;/p&gt;

        &lt;p&gt;
          Should we &lt;i&gt;still&lt;/i&gt; be careful with letting optimizers loose on the world? Not because they&apos;re especially &lt;i&gt;good at optimizing&lt;/i&gt; but because they have tools now, tools that might be good at recognizing Good but are not given the choice to pursue it... being just mere tools?
        &lt;/p&gt;


      
        </summary>
    </entry>
    <entry>
        <title type="html">Spaces for Humans</title>
        <link href="https://simonsafar.com/2025/spaces_for_humans/"></link>
        <id>https://simonsafar.com/2025/spaces_for_humans/</id>
        <author>
            <name>Simon Safar</name>
        </author>
        <published>2025-05-31T17:00:00.000000-07:00</published>
        <updated>2025-05-31T17:00:00.000000-07:00</updated>
        <summary type="html" xml:base="https://simonsafar.com/2025/spaces_for_humans/">
            
        &lt;h1&gt; Spaces for Humans &lt;/h1&gt;
        &lt;div class=&quot;date&quot;&gt; 2025/06/01 &lt;/div&gt;

        &lt;p&gt;
          It was a hard life, a couple hundred years ago. For most people, food, clothing and warmth were scarce resources; you needed to work for them, a lot, taking up a significant fraction of your available time. You... &lt;i&gt;sometimes&lt;/i&gt; found the time to socialize, to go to church, meet up with friends, have a good time... but this wasn&apos;t really a priority in the grand scheme of things.
        &lt;/p&gt;

        &lt;p&gt;
          Compared to this, we now have near infinite amounts of resources.
        &lt;/p&gt;

        &lt;p&gt;
          Have we... used them to build anything impressive?
        &lt;/p&gt;

        &lt;p&gt;
          I&apos;m writing this from &lt;a href=&quot;https://www.lighthaven.space/&quot;&gt;Lighthaven&lt;/a&gt;, in Berkeley. The place is... hard to describe; the closest I have in my imagination is basically Hogwarts, with its many little rooms, libraries, nooks; there is a room in the attic of one of the buildings, with wooden walls, thick carpets, those folding sit-on-ground chairs (do they have a name?) scattered around. The occasion is LessOnline, a weekend convention of people vaguely associated with rationalists / EA / AI safety &amp;amp; their favorite blog authors (along with, um, less well known ones, like the author of this very blog). Basically, it&apos;s a bunch of nerds hanging out at an extremely cool social place, talking about random things and sometimes listening to talks.
        &lt;/p&gt;

        &lt;p&gt;
          &lt;a href=&quot;https://www.lighthaven.space/&quot;&gt;&lt;img src=&quot;bayes-house-lobby.jpg&quot;&gt;&lt;/a&gt;
        &lt;/p&gt;

        &lt;p class=&quot;smallgray&quot;&gt;
          (there are &lt;i&gt;more&lt;/i&gt; photos if you click on these!)
        &lt;/p&gt;

        &lt;p&gt;
          &lt;a href=&quot;https://www.lighthaven.space/&quot;&gt;&lt;img src=&quot;outside.jpg&quot;&gt;&lt;/a&gt;
        &lt;/p&gt;

        &lt;p&gt;
          Obviously, the place has been nontrivial to set up. Buying the former Rose Garden Inn and turning it into Hogwarts is no small feat; it probably also takes the right kind of people to actually enjoy this. But... still:
        &lt;/p&gt;

        &lt;p&gt;
          ... you look around &lt;i&gt;outside&lt;/i&gt; of this, both in time and space; you see very different things.
        &lt;/p&gt;

        &lt;p&gt;
          You see people getting into large metal boxes, and spending hours per day, mostly alone, directing these machines, speeding through &lt;a href=&quot;/2024/cars_are_real_estate/&quot;&gt;many&lt;/a&gt; areas actively dangerous for unaugmented humans, to one well-specified building, where they sit at one particular desk for 8-10 hours. Lucky ones find some of this fun; less lucky ones do it only because society would abandon them for their transgression of The Rules if they stopped doing so.
        &lt;/p&gt;

        &lt;p&gt;
          Then, they return to their homes. Occasionally, they do put in the work to organize some events of meeting up with friends or doing something not related to their Job Function... just staying at home and staring at various screens is pretty common too.
        &lt;/p&gt;

        &lt;p&gt;
          Where would you even... go though?
        &lt;/p&gt;

        &lt;p&gt;
          It is saying something about how &amp;quot;well&amp;quot; this generally works that Starbuckses are among the best, coziest places to end up as a Generic Human. There are other humans surrounding you, there is some good music, hot coffee, delicious croissants, wifi for your laptop; you can just work on something, or meet up with a few friends and chat while sipping something neat. More specifically: it is a room, it&apos;s welcoming, it has chairs and tables and a restroom and you&apos;re free to just exist, despite the real estate not being your actual property.
        &lt;/p&gt;

        &lt;p&gt;
          And yet... Starbuckses still miss a lot of social infrastructure. They&apos;d look at you really oddly if you just randomly started giving a talk in one of the corners. Few ways of connecting to other people. Also... once you have seen places that are more cozy than &amp;quot;tables and chairs in a room&amp;quot;... could we do better?
        &lt;/p&gt;

        &lt;p&gt;
          There &lt;i&gt;are&lt;/i&gt; places that are doing better. Here is Google&apos;s New York office:
        &lt;/p&gt;

        &lt;p&gt;
          &lt;img src=&quot;nyc_fancy_couch.jpg&quot;&gt;
        &lt;/p&gt;

        &lt;p&gt;
          &lt;img src=&quot;nyc_couch.jpg&quot;&gt;
        &lt;/p&gt;

        &lt;p&gt;
          Also, some college campuses come to mind. While they&apos;re rarely this cozy, there is typically more room to hang out than &amp;quot;basically none&amp;quot;. Is it a surprise that it&apos;s a lot easier to connect with people while in undergrad?
        &lt;/p&gt;


        &lt;p&gt;
          (... yes there are other important reasons... &amp;quot;there is literally nowhere to do this&amp;quot; still sounds like it&apos;s a pretty important one though.)
        &lt;/p&gt;

        &lt;p&gt;
          But then also... more broadly... humans are supposed to be a fairly social species. There are also a &lt;i&gt;lot&lt;/i&gt; of other humans you could potentially talk to, hang out with; there are a lot of places where you could exist, look around, get inspired. The aforementioned metal boxes, despite all their downsides, have the great power of moving us to many of these places.
        &lt;/p&gt;

        &lt;p&gt;
          And yet... we don&apos;t. Despite all the powers, despite all the wealth, we still pretend, way too often, that we are subsistence workers of the variant depicted in Dickens books, slightly modernized but not given a lot more freedom. Even when it happens, it&apos;s a Vacation you pay for, not... your normal existence. Your normal existence is supposed to be bleak and uninteresting. Like the spaces you typically inhabit.
        &lt;/p&gt;

        &lt;p&gt;
          Can we do better?
        &lt;/p&gt;

        &lt;!-- &lt;p&gt; --&gt;
        &lt;!--   Overall, this shouldn&apos;t be &lt;i&gt;that&lt;/i&gt; hard. Yes, if you&apos;re Google, you&apos;re going to add some &lt;i&gt;extremely overpriced&lt;/i&gt; couches, make the wooden covering of the wall the best you can get, and just... show off your wealth in general. We do not need all this though. Take the ruin bars in Budapest! Yes, these days they&apos;re fancy tourist attractions; they really didn&apos;t need to fake the part where the buildings &lt;i&gt;look&lt;/i&gt; like they are half fallen apart, filled with interesting artifacts that many would consider mostly just a challenge to properly dispose of. --&gt;
        &lt;!-- &lt;/p&gt; --&gt;


      
        </summary>
    </entry>
    <entry>
        <title type="html">You Can Run a DNS Server</title>
        <link href="https://simonsafar.com/2025/running_dns/"></link>
        <id>https://simonsafar.com/2025/running_dns/</id>
        <author>
            <name>Simon Safar</name>
        </author>
        <published>2025-05-02T17:00:00.000000-07:00</published>
        <updated>2025-05-02T17:00:00.000000-07:00</updated>
        <summary type="html" xml:base="https://simonsafar.com/2025/running_dns/">
            
        &lt;h1&gt; You Can Run a DNS Server &lt;/h1&gt;
        &lt;div class=&quot;date&quot;&gt; 2025/05/03 &lt;/div&gt;

        &lt;p&gt;
          In fact, it&apos;s not even especially hard to run a DNS server.
        &lt;/p&gt;

        &lt;p&gt;
          In case you were wondering whether this would mean... writing zone files with some arcane syntax that &lt;code&gt;BIND 9&lt;/code&gt; is apparently famous for, I hereby present the main point of this post: a recommendation for which DNS server to choose.
        &lt;/p&gt;

        &lt;p&gt;
          As it happens, &lt;a href=&quot;https://doc.powerdns.com/authoritative/index.html&quot;&gt;PowerDNS&lt;/a&gt; does support querying a database for DNS records. Based on &lt;a href=&quot;/2025/throw_it_into_postgres/&quot;&gt;some earlier posts&lt;/a&gt;, readers might guess &lt;a href=&quot;https://doc.powerdns.com/authoritative/backends/generic-postgresql.html&quot;&gt;which one&lt;/a&gt; we&apos;ll be using.
        &lt;/p&gt;
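        &lt;p&gt;
          To sketch what this looks like (the host, database name and credentials below are made-up placeholders; the option names come from the generic PostgreSQL backend), the relevant part of &lt;code&gt;pdns.conf&lt;/code&gt; is just a handful of lines:
        &lt;/p&gt;

        &lt;p&gt;
          &lt;code&gt;&lt;pre&gt;
# load the generic PostgreSQL backend
launch=gpgsql

# where to find the database holding the records table
gpgsql-host=127.0.0.1
gpgsql-dbname=pdns
gpgsql-user=pdns
gpgsql-password=some-password
          &lt;/pre&gt;&lt;/code&gt;
        &lt;/p&gt;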

        &lt;p&gt;
          &lt;code&gt;&lt;pre&gt;
pdns=&amp;gt; select * from records order by id desc;
 id | domain_id |             name              | type  |              content               | ttl | prio | change_date | disabled | ordername | auth
----+-----------+-------------------------------+-------+------------------------------------+-----+------+-------------+----------+-----------+------
 43 |         1 | some-service.your.example.com | CNAME | your-server.your.example.com       |  10 |      |             | f        |           | t
 42 |         1 | webhooks.your.example.com     | CNAME | your-other-server.your.example.com |  10 |      |             | f        |           | t
 41 |         1 | calendars.your.example.com    | CNAME | your-server.your.example.com       |  10 |      |             | f        |           | t
 40 |         1 | whisper.your.example.com      | CNAME | your-server.your.example.com       |  10 |      |             | f        |           | t
 39 |         1 | your-server.your.example.com  | A     | 100.99.98.97                       |  10 |      |             | f        |           | t

          &lt;/pre&gt;&lt;/code&gt;
        &lt;/p&gt;

        &lt;p&gt;
          As for how anyone is going to see these DNS records... the simplest solution is likely just making a subdomain of your actual domain (&quot;your&quot; in our case) and pointing its &lt;code&gt;NS&lt;/code&gt; record at your (publicly accessible) DNS server.
        &lt;/p&gt;
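        &lt;p&gt;
          In zone-file terms (the names and the IP address here are placeholders), the delegation you&apos;d add at your DNS provider for &lt;code&gt;example.com&lt;/code&gt; looks something like this:
        &lt;/p&gt;

        &lt;p&gt;
          &lt;code&gt;&lt;pre&gt;
; delegate everything under your.example.com to your own server
your  IN  NS  ns1.example.com.
; ... which needs an address record of its own
ns1   IN  A   203.0.113.5
          &lt;/pre&gt;&lt;/code&gt;
        &lt;/p&gt;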

        &lt;p&gt;
          This way, your main domain and those subdomains that are of &lt;i&gt;some&lt;/i&gt; importance can still be served by whoever provides your domain name, with two distinct, redundant name servers that offer more resilience than your single experimental PowerDNS instance. For example: email is pretty resilient; if the target server goes down, the sender will just retry several times... on the other hand, if the target address is under a domain that (for the time being) doesn&apos;t even &lt;i&gt;exist&lt;/i&gt;, weirder things might happen.
        &lt;/p&gt;

        &lt;p&gt;
          Meanwhile, you no longer have to log into e.g. the Namecheap website to add a few &lt;code&gt;CNAME&lt;/code&gt; records for some extra services you brought up; adding them is just an &lt;code&gt;insert&lt;/code&gt; away.
        &lt;/p&gt;
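        &lt;p&gt;
          With the schema from above, bringing up a new service looks something like this (the names are placeholders, and &lt;code&gt;domain_id&lt;/code&gt; has to match your zone&apos;s row in the backend&apos;s &lt;code&gt;domains&lt;/code&gt; table):
        &lt;/p&gt;

        &lt;p&gt;
          &lt;code&gt;&lt;pre&gt;
pdns=&amp;gt; insert into records (domain_id, name, type, content, ttl, disabled, auth)
        values (1, &apos;git.your.example.com&apos;, &apos;CNAME&apos;,
                &apos;your-server.your.example.com&apos;, 10, false, true);
          &lt;/pre&gt;&lt;/code&gt;
        &lt;/p&gt;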

      
        </summary>
    </entry>
    <entry>
        <title type="html">Are LLMs Partial Lookup Tables?</title>
        <link href="https://simonsafar.com/2025/partial_lookup_tables/"></link>
        <id>https://simonsafar.com/2025/partial_lookup_tables/</id>
        <author>
            <name>Simon Safar</name>
        </author>
        <published>2025-04-28T17:00:00.000000-07:00</published>
        <updated>2025-04-28T17:00:00.000000-07:00</updated>
        <summary type="html" xml:base="https://simonsafar.com/2025/partial_lookup_tables/">
            
        &lt;h1&gt; Are LLMs Partial Lookup Tables? &lt;/h1&gt;
        &lt;div class=&quot;date&quot;&gt; 2025/04/29 &lt;/div&gt;

        &lt;p&gt;
          Many of you might be familiar with Searle&apos;s Chinese room thought experiment.
        &lt;/p&gt;

        &lt;p&gt;
          If not, here&apos;s the basic idea.
        &lt;/p&gt;

        &lt;p&gt;
          A person is sitting in a room which has an opening, through which sometimes the outside world deposits pieces of paper, full of Chinese characters. Now, our person doesn&apos;t &lt;i&gt;actually&lt;/i&gt; understand Chinese; fortunately though, the room also contains a copious amount of notes, describing sequences of characters that &lt;i&gt;could&lt;/i&gt; come in &amp;amp; instructions for the required responses to them.
        &lt;/p&gt;

        &lt;p&gt;
          No actual explanation is given though. They look like... &amp;quot;if you see the characters &apos;你好吗?&apos;, you respond with &apos;我很好&apos;&amp;quot;.
        &lt;/p&gt;

        &lt;p&gt;
          Needless to say, our person is not going to learn Chinese from this especially quickly.
        &lt;/p&gt;

        &lt;p&gt;
          The entire &lt;i&gt;point&lt;/i&gt; of the thought experiment was to evoke some inner conflict: the &lt;i&gt;person&lt;/i&gt; clearly doesn&apos;t understand what&apos;s going on, the &lt;i&gt;room&lt;/i&gt; is, well, a room, so it doesn&apos;t either; who or what exactly understands Chinese in this system?
        &lt;/p&gt;

        &lt;p&gt;
          The setup can be used as an argument for symbol manipulation not being equivalent to true &lt;i&gt;understanding&lt;/i&gt;. Meanwhile, critics argue that the &lt;i&gt;room&lt;/i&gt;, the system itself (including the human symbol-manipulator) is the one that speaks Chinese.
        &lt;/p&gt;

        &lt;p&gt;
          But... does it really?
        &lt;/p&gt;

        &lt;p&gt;
          Would said room, system or not, be capable of learning new words? Generalize to new situations?
        &lt;/p&gt;

        &lt;p&gt;
          Or is it just a rigid, static image of someone&apos;s mind who &lt;i&gt;actually&lt;/i&gt; understands what&apos;s going on?
        &lt;/p&gt;

        &lt;h1&gt;It might be a spectrum&lt;/h1&gt;

        &lt;p&gt;
          On one end, there is a pure lookup table. In goes the entire context of the conversation; there is one lookup, of &amp;quot;what do we do if we are at this precise point in the conversation&amp;quot;, out goes the one, deterministic, pre-written answer... which answer, nevertheless, still makes sense, since someone who &lt;i&gt;understands&lt;/i&gt; things took the time to write down a response for every. single. one. of the inputs that could ever happen.
        &lt;/p&gt;

        &lt;p&gt;
          (One might question the feasibility of this, given the comparatively meager number of atoms in the universe; it&apos;s a &lt;i&gt;thought&lt;/i&gt; experiment for a reason though. Also, you can totally do it if you replace &amp;quot;every possible conversation&amp;quot; with &amp;quot;every possible combo of two-digit numbers you might ever think of multiplying&amp;quot;.)
        &lt;/p&gt;

        &lt;p&gt;
          Meanwhile, on the other end, there is the Human Soul with its Incomparable Gift of Consciousness that Just Sees Things.
        &lt;/p&gt;

        &lt;p&gt;
          C++ code, being &lt;a href=&quot;/2021/soulprints/&quot;&gt;a partial copy of human souls&lt;/a&gt;, is somewhere in between.
        &lt;/p&gt;

        &lt;p&gt;
          So are the kids with excellent training to &lt;a href=&quot;https://www.youtube.com/shorts/JI6LDSSTLzs&quot;&gt;accumulate streams of single-digit numbers&lt;/a&gt;, taking just a couple hundred milliseconds for each.
        &lt;/p&gt;

        &lt;p&gt;
          And now that we have &lt;a href=&quot;/2025/living_in_a_scifi_movie/&quot;&gt;actual computers you can talk to&lt;/a&gt; and that do seem to understand what you&apos;re saying....
        &lt;/p&gt;

        &lt;p&gt;
          ... where are &lt;i&gt;they&lt;/i&gt; on this line?
        &lt;/p&gt;

        &lt;h1&gt;Parrot Theory&lt;/h1&gt;

        &lt;p&gt;
          You will surely find people who will say: &amp;quot;very close to a human&amp;quot;.
        &lt;/p&gt;

        &lt;p&gt;
          After all, have you tried talking to one? They are smart, they are funny, they can solve math and programming problems better than 90% of humanity, they can reason about things, what more do you need to declare them intelligent?
        &lt;/p&gt;

        &lt;p&gt;
          They might not be perfect or especially well-rounded, but this is clearly intelligence.
        &lt;/p&gt;

        &lt;p&gt;
          Meanwhile, you could also argue that they are just interpolating between the immense number of training data points they have seen. They don&apos;t really &lt;i&gt;understand&lt;/i&gt; a lot of the things they &lt;i&gt;seem&lt;/i&gt; to be competent with! It&apos;s just... there was, once upon a time, &lt;i&gt;something&lt;/i&gt; similar on the internet; not &lt;i&gt;quite&lt;/i&gt; as recognizable as a search engine result, but still not especially far from one. They couldn&apos;t have come up with all these insights alone: they need all the human work that was once put into them.
        &lt;/p&gt;

        &lt;p&gt;
          As such, they will get stuck at the level of human achievement.
        &lt;/p&gt;

        &lt;p&gt;
          To bring up an example that does &lt;i&gt;not&lt;/i&gt; work like this, take the game of Go. We started training AlphaGo the way we train LLMs: on huge databases of games originally played by humans. But then we switched over to... not using any historical, human-derived input at all: AlphaZero started from, well, zero, and yet, just by playing against itself, surpassed the level of play humanity could achieve, &lt;i&gt;without even looking at what we did&lt;/i&gt;. It &lt;i&gt;clearly&lt;/i&gt; understands what&apos;s going on in this game.
        &lt;/p&gt;

        &lt;p&gt;
          You could &lt;i&gt;not&lt;/i&gt; (currently) bootstrap an entire human-level civilization just by launching a few tens of thousands of lines of code on a big machine with a GPU, and then... waiting a lot.
        &lt;/p&gt;

        &lt;p&gt;
          It is understandable why. For example, language models only interact with text. They have never really seen three-dimensional objects, so all their knowledge about them is just a mere lookup table, based on shadows of activities once done by a human visual cortex. They might figure out some regularities in how humans talk about colors, for example, but they will never have the visceral feeling of &lt;i&gt;seeing&lt;/i&gt; the color &lt;span style=&quot;color: red;&quot;&gt;red&lt;/span&gt;. They will have about as much intuitive understanding of this as humans of quantum mechanics.
        &lt;/p&gt;

        &lt;p&gt;
          This is, by the way, the problem of &amp;quot;symbol grounding&amp;quot;, long unsolved (unsolvable?) by AI technology. You can call your symbol &amp;quot;red&amp;quot; in the code, but nothing will connect your mere name to the &lt;i&gt;redness&lt;/i&gt; in the real world, so...
        &lt;/p&gt;

        &lt;p&gt;
          ... well, unless you take a picture and feed it to your tokenizer? Like many multimodal models do these days? So that they will &lt;i&gt;most definitely&lt;/i&gt; know what the difference between &amp;quot;red&amp;quot; and 🔴 is?
        &lt;/p&gt;

        &lt;p&gt;
          Well yeah, that will do it.
        &lt;/p&gt;

        &lt;p&gt;
          Anyway... this might have been the one unfortunate example that got solved oddly quickly. But: there are &lt;i&gt;still&lt;/i&gt; a lot of areas where their seeming proficiency is thanks to us doing a lot of the work first. We&apos;re still a lot better at playing Pok&#xE9;mon games, apparently; they&apos;re terrible at planning. And even in the visual realm: yes, the input is not &lt;i&gt;just&lt;/i&gt; text now, but even the &lt;i&gt;pictures&lt;/i&gt; you find on the internet contain valuable information that would otherwise be hard for them to obtain. They don&apos;t yet have a complete model of reality: they understand some of it &amp;amp; copy the rest from us so well that we don&apos;t even notice.
        &lt;/p&gt;

        &lt;p&gt;
          They&apos;re part understanding, part lookup table.
        &lt;/p&gt;

        &lt;p&gt;
          ... but... aren&apos;t we all?
        &lt;/p&gt;

        &lt;p&gt;
          At least those of us who read the &lt;i&gt;symbols of a book&lt;/i&gt; about quantum mechanics, instead of just... gaining an intuitive understanding of electron orbitals at the age of 4 during that random beach trip?
        &lt;/p&gt;

        &lt;p&gt;
          (&amp;quot;should be obvious, really? have you ever built a sand castle? it&apos;s... just look! You still don&apos;t? Look, now with the salt water? How can you still not???&amp;quot;)
        &lt;/p&gt;

        &lt;!-- &lt;p&gt; --&gt;
        &lt;!--   (Imagine being alive in times when &quot;you can talk to a computer and it obviously gets what you&apos;re saying&quot; wasn&apos;t something you could experience. Very different world.) --&gt;
        &lt;!-- &lt;/p&gt; --&gt;

        &lt;!-- &lt;p&gt; --&gt;
        &lt;!--   (... although... if you actually didn&apos;t get to experience said times, you&apos;re likely... a 4 year old with exceedingly impressive reading skills...?) --&gt;
        &lt;!-- &lt;/p&gt; --&gt;



      
        </summary>
    </entry>
</feed>