❀✿❀ SuperLaserNino ✿❀✿

3/3: Mathematical notation

13 June 2016

Modified: 12 October 2016

1876 words

[Part 3 is about communicating mathematical ideas. Part 1, Part 2. I took care to contain the tedious math bits in single paragraphs, so the point is still clear if you choose to only read the fun parts.1]

summary. There is no such thing as “wrong” notation. All that counts is that you get the math right and communicate your ideas clearly.


Last time I explained how it’s not accurate to say that an electron “is” a wave function, because an electron is a thing in the universe and a wave function is a mathematical object, and mathematical objects don’t live in the real universe. When people talk about wave functions, they often use the letter ψ. Obviously, even though it looks all nice and wavy, the ψ itself isn’t the wave function either – it’s just its name. The concept of names is one we know and love from the real world: When I point at a chair and say, “This is Bob,” it’ll be clear what I mean when I explain that Bob has three legs. While it’s a terrible idea to call a chair Bob, giving things and their relationships with each other funny names is basically what mathematics is all about.

Just like we grew up believing that dictionaries had authority over the reality of words, school taught us that + means you add two numbers, - means you subtract them, × means you times them, and so on. But these symbols weren’t handed down from the heavens to the first humans to walk the Earth. There was a time when they didn’t exist, and then someone made them up. Now, +, -, /, × are pretty basic and sometimes you may even have a use for them in everyday life, so these symbols are generally assumed to refer to their corresponding arithmetic operations. There are a handful of other symbols that are pretty unambiguous in their meaning, like √ or =, but beyond this lies madness.

madness 1: when wrong is right and right is complicated

The slope of a function graph is called the function’s derivative. (If you’re familiar with, like, math, this may be known to you.) When your function is a straight line, you get the slope by dividing the difference between two function values by the difference of their arguments. When we write the differences as Δf and Δx, the derivative can be written as f′ = Δf/Δx. Here, both Δf and Δx are real numbers. When you have an arbitrary curve instead of a straight line, you can approximate the slope by choosing Δf and Δx very small. The smaller you make them, the more accurate the result will be. Want infinite accuracy? Make them infinitely small. To make it clear that you’re working with infinitely small numbers (“infinitesimals”), you call them df and dx, which gives you f′ = df/dx. Yay!
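Here’s a minimal numerical sketch (my own illustration, not from the original post; the function and the evaluation point are arbitrary) of how the finite-difference slope Δf/Δx approaches the derivative as the step shrinks:

```python
# A small sketch: the finite-difference slope approaches the derivative as the
# step shrinks. The function and the evaluation point are arbitrary examples.

def slope(f, x, dx):
    """Finite-difference approximation (f(x + dx) - f(x)) / dx."""
    return (f(x + dx) - f(x)) / dx

f = lambda x: x ** 3   # example function; the exact derivative is 3 * x**2
x = 2.0                # exact slope at x = 2 is 12

for dx in (1.0, 0.1, 0.001, 1e-6):
    print(dx, slope(f, x, dx))

# The printed slopes approach 12 as dx shrinks. Setting dx = 0 outright,
# however, would give 0/0, which is exactly the puzzle discussed next.
```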

But … what are df and dx? Both are infinitely small, right? So if you try to calculate df, you get 0. And if you try to calculate dx, you get 0, too. If you took any other value for them, they’d no longer be infinitely small, and thus you’d get an inaccurate result. Thus, if df/dx were a normal fraction like Δf/Δx, it would be equal to 0/0, and we all know never to divide by zero. Hence, since df/dx does have a value, it must be something else entirely. Remember part 2, where I wrote,

If Newtonian mechanics is wrong, why do we still use it so damn much?

In that post, I explained that Newtonian mechanics often gives us the best prediction we can make, and using a “more correct” model would not give us a better result. Maybe this situation is similar: what do we get if we pretend df and dx are numbers, and that we just don’t know their values?

example 1. Say you’re told to solve the equation f(x) · f′(x) = x². This may look daunting at first, but when you write the derivative as df/dx instead of f′, you get

f(x) · df/dx = x²,

and multiplying each side by dx gives you f(x) df = x² dx. This looks like integrals without the integral signs, so let’s put some on both sides:

∫ f(x) df = ∫ x² dx.

Now we have f²/2 = x³/3, so f(x) = √(2/3) · x^(3/2), and from this you can calculate the derivative f′(x) = √(2/3) · (3/2) · √x. Popping this back into our initial equation, we get √(2/3) · √(2/3) · (3/2) · x^(3/2 + 1/2) = x². The roots of (2/3) combine to a full (2/3), which then cancels with the (3/2), and you’re left with x² = x², which tells you that your solution is correct. /example 1.
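If you don’t want to take the hand-waving on faith, here’s a small sanity check using sympy (assuming it’s available; the variable names are mine, not from the post) that the solution really satisfies the original equation:

```python
# A quick sanity check of example 1 with sympy: does f(x) = sqrt(2/3) * x**(3/2)
# really satisfy f(x) * f'(x) = x**2?

import sympy as sp

x = sp.symbols('x', positive=True)
f = sp.sqrt(sp.Rational(2, 3)) * x ** sp.Rational(3, 2)

lhs = sp.simplify(f * sp.diff(f, x))   # f(x) * f'(x)
print(lhs)                             # x**2
print(sp.simplify(lhs - x ** 2) == 0)  # True: the "wrong" method gave a right answer
```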

[Image: As a reward for getting through the last paragraph, here’s a picture of plush owls sitting in shoes. Inhale. Exhale.]

In other words, we used a “mathematically” “wrong” approach to correctly solve a problem. In many situations, this is even a good idea. As long as you can prove that what you’re doing works, using symbols that look less mathematically rigorous but lead you to the solution more intuitively can save a lot of time and even help prevent mistakes.

The cool thing about this is: Many people have already realized this, which is exactly the reason we have f′(x) = df/dx and curl v = ∇ × v and so on, which means you can often be pretty wishy-washy about your notation and still end up making fewer mistakes.

madness 2: when math isn’t all clear and unambiguous

Mathematics is known for being clear and unambiguous. And yes, we can definitively2 prove that a theorem is either true or false, in contrast to the sciences where we only have falsifiable hypotheses and probabilities. But the language of math is just as bad as the language of language. Languages take shortcuts, sacrificing semantic clarity for the sake of data transmission rates. This is okay because most of the time everyone knows what you’re talking about.3 They tell you that mathematics doesn’t work that way, but I’m going to make the case that it does.

You know how you do your particle physics homework, and you use the symbol m_e, and the only thing that symbol has ever stood for was the mass of an electron, and your teacher tries to make this elaborate argument about the importance of declaring your variables but somehow they completely miss that you never told anyone what π means or what e means or what log means, and so on? But then the cutoff point between what you need to define and what’s “obvious” isn’t really clear, and it becomes this huge frustrating mess? That’s the kind of thing I’m talking about. Or you say, “Let p be the momentum operator,” and your professor complains that p can’t be an operator because operators always need to have a hat, like p̂, and you say, no, you defined p to be the operator and shut up you’re being ridiculous, but the professor insists and you end up having to draw a little hat on every single instance of the letter p in your equations even though leaving it out would give you 100% the correct result and cause zero confusion.

example 3. You have a function f(x,t) you want to integrate over x.4 You’ll write something like ∫_a^b f(x,t) dx = [F(x,t)]_a^b, right? And here it’s totally not clear whether the bracket is to be evaluated at x = a and x = b or at t = a and t = b. You know, from looking to the left of the equals sign, but it isn’t clear just by looking at the right half of the equation. Likewise, some authors write volume integrals as ∫_V f(r_1, r_2) dτ, where it’s unclear whether they’re integrating over r_1 or r_2. They fix this problem by putting explanations in the text and following conventions throughout the book so it’s clear from context what they mean. /example 3.
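To make the ambiguity concrete, here’s a tiny sympy sketch (the integrand f(x,t) = x·t is my own arbitrary example, not from the post); the bracket only becomes unambiguous once you say which variable runs from a to b:

```python
# With f(x, t) = x * t, the bracket [F(x, t)]_a^b only means something once you
# know which variable runs from a to b; the "dx" on the left is what tells you.

import sympy as sp

x, t, a, b = sp.symbols('x t a b')
f = x * t                          # arbitrary example integrand

F = sp.integrate(f, x)             # antiderivative in x: t * x**2 / 2
bracket_in_x = F.subs(x, b) - F.subs(x, a)
print(sp.simplify(bracket_in_x))   # equals t * (b**2 - a**2) / 2

print(sp.integrate(f, (x, a, b)))  # same result, with the variable stated explicitly
```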

example 4. Or, instead of integrals, let’s talk about derivatives. When you have a bunch of equations with many partial derivatives, it can be frustrating to write ∂V_x/∂x, ∂V_y/∂x, ∂V_x/∂y, and so on, over and over. This is because you’re told that the components of a vector field V must always be written as (V_x, V_y, V_z). But since all these letters are only names, you can simply rename the components. For example, you could call the vector field V = (X, Y, Z). This already saves you the work of writing a subscript every time you reference one of the components of V. But as an added bonus, you can now use the subscripts for other purposes, like partial derivatives. Thus, you can define ∂V_x/∂x as X_x, ∂V_y/∂x as Y_x, and so on. This is much shorter and way more fun! I tried that once and my TA was hopelessly confused because they didn’t understand that indices on vectors don’t have to mean selecting the corresponding component, even though I explicitly defined what everything means at the top of the page.

Context matters when writing down equations. Not everything has to be clear in isolation, as long as you explain what’s happening. Obviously this doesn’t mean that you can just write literally anything, because then it wouldn’t be clear anymore what you mean. But what you can do is invent new notation and use it if it makes sense. Note, however, that making up your own things isn’t always a good idea: there already exists a large set of shared expectations about what many symbols do and, often, it makes sense to go with established conventions. Like if you’re using other people’s equations, you shouldn’t just exchange all the letters for no good reason, even if you feel like ξ is a much nicer letter than λ.

In conclusion: Be free, be spontaneous, be brave – give your equations meaning instead of useless hats and subscripts. Sometimes, you really don’t have to repeat yourself.

Footnotes

  1. In the future, when I have a list of my most notable essays, this one will be “The Long, Confusing, Meandering One.” This is my A Feast For Crows in terms of exciting action; it’s my American Gods in terms of quickly getting to the point; it’s my Getting Things Done in terms of elegant phrasing – you get the idea. Think of this more as a piece of performance art, rather than an informative article.

  2. If you ignore external uncertainties.

  3. Except when you’re writing a 2000 word essay on how to use mathematical notation without an outline. What was this guy thinking?

  4. I’m so sorry about all the integrals. And all the footnotes.

2/3: Models

30 January 2016

Modified: 13 June 2016

1671 words

[Part 2 is the best part. Part 1, part 3.]

In science, we try to understand the world by building models and theories that describe it. You see an apple falling on your head, think, “oh, maybe that’s how the planets move, too”, and you write down rules that allow for the motion of planets and don’t allow for some phenomena you do not see, like things falling upward. You call the collection of those rules your model, or theory. When you have your model, you perform more experiments to test it, and every time your model’s prediction roughly matches your observations, you get more confident that your model is correct.

What does it mean for a model to be correct? This is where the trouble starts. In school we learn that classical mechanics is a pretty good approximation of reality, but that quantum mechanics and relativity are the correct theories of how the universe works.1 This framing has always bothered me: If Newtonian mechanics is wrong, why do we still use it so damn much?

Say you throw your keys out the window, and you want to calculate the path they will take to the ground as exactly as possible. So you get out your pencil and notebook and you start scribbling. Should you do your calculations relativistically? It would be more work, but you want to be really exact, so you add a bunch of γ’s everywhere and do your calculations relativistically. Then you notice that you’ve been assuming a flat earth the whole time. Oh no! All right – the earth’s a sphere, right? Let’s use that, and we get an ellipse instead of a parabola for the flight path of our keychain. So – is the result more accurate than the classical, flat-earth one? Certainly not a lot more, but maybe a little? Nope. Not one bit. Why? Because the difference air resistance makes, and the uncertainty in the direction you’re throwing, are much bigger than the difference a relativistic calculation could make.
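To put a rough number on how small the relativistic correction is, here’s a back-of-the-envelope sketch (the throwing speed is my own illustrative guess):

```python
# How big is the relativistic correction for thrown keys? The Lorentz factor
# gamma differs from 1 by roughly (v/c)**2 / 2, which is vanishingly small here.

import math

c = 299_792_458.0   # speed of light, m/s
v = 10.0            # a generous throwing speed, m/s (illustrative guess)

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
print(gamma - 1.0)  # about 6e-16: many orders of magnitude below the effect of
                    # air resistance or the uncertainty in your aim
```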

This still doesn’t mean that objects in our everyday lives have a different nature than single electrons or supermassive black holes. It just means that, if you put enough electrons and protons and stuff together, and you don’t make them too dense or too fast, you can predict what they’re going to do by using the model of classical mechanics.

Many physics students, when they’re starting out, seem to feel like they’ve been promised something. That they’ll be led behind the curtains of reality and shown how the world really works. They seem to accept Newtonian mechanics – it works, after all. Medium-sized objects, apparently, are Newtonian in nature. But it doesn’t take long for the disappointments to start. “Ideal gases don’t exist in nature, but it’s a simple model that works relatively well for lots of stuff,” they tell us. We’re not happy, but we’ll take the approximation, for now. We’re relieved when they teach us the “real gas” models, like Van der Waals gases. Then it gets worse again: “Ideal fluids are a pretty absurd approximation. There are no ideal fluids in the real world, and for most fluids, you don’t even get very good results using this model. But it’s simple, and it teaches the principles that you need to understand to work with better fluid models later.” We’re not taught the more complicated fluid models that semester, and it leaves us with a queasy feeling. Why are we being taught a rough approximation instead of the correct model?

After a few semesters, the students get herded into a lab, to perform their first experiments themselves. Their belief is already shaken by countless lectures only teaching rough approximations instead of the real thing. But this is worse. Here, they finally see how the sausage is made. “All of physics is just estimations and approximations!” they exclaim. “Nothing here is exact!” It slowly sinks in that this is not just a rough approximation of what physicists do. Physics really is just approximations. Dutifully, the students draw error bars in their hand-crafted plots of noisy data, and weep.

What’s important is that this isn’t a bad thing, and especially not a preventable thing. The approximations aren’t the result of laziness. The small inaccuracies in every scientific theory are the result of countless hours of patient, skillful labor. It’s awesome that we can make very accurate predictions about the behavior of gases just using pV=NT, instead of calculating the exact position and momentum of every elementary particle in our system. Because, by the way, that “system” is the entire universe. It’s super cool that we can just pretend planets are single points in space, with a mass and no size, only feeling the gravity of the sun and not each other’s, and still predict their orbits with great accuracy. Planets aren’t spheres, their orbits aren’t circles, Kepler’s Laws of Planetary Motion aren’t woven in the fabric of the universe, and yet, pretending all this is true will get us to Mars.

Since all models are wrong the scientist cannot obtain a “correct” one by excessive elaboration. On the contrary following William of Occam he should seek an economical description of natural phenomena. Just as the ability to devise simple but evocative models is the signature of the great scientist so overelaboration and overparameterization is often the mark of mediocrity. — George E. P. Box

The goal here is not “understanding”. The goal is making good predictions. I see my fellow students understanding that ideal gases don’t exist in nature, but I don’t see them make the jump to “Van der Waals gases don’t exist any more than ideal ones do”. I don’t see them understanding that “quantum wave functions” are mathematical functions rather than things out there in the universe. The problem is that physics ventures so deep into the hidden parts of reality that it’s no longer intuitively clear that there is a distinction between the map and the territory. They tell you about the paradox of the double-slit experiment and you conclude, “electrons aren’t just particles”. They show you Schrödinger’s equation and solve it to get the wave function. “That explains it!” you think, and you conclude that electrons are wave functions.

But the universe does not run on math. The reason the universe looks so much like it’s made of math when you apply science to it is because math is really really versatile. But this doesn’t mean our Laws of Physics are more than summaries of our observations. It’s not the universe that is good at being modeled by math – it’s the math that is good at modeling anything, be it our universe or universes with different rules.

I’ve heard someone say, after reading a mechanics textbook, that they finally understand why perpetual motion is impossible. It’s because something something holonomic constraints can’t do any work because something dot product. This can’t possibly be true because that’s not the order in which things happened. First the perpetual motion machine didn’t work, then the theory was written and it was written in such a way that perpetual motion machines don’t work, and that’s why something something holonomic constraints forbids them. If the textbook had explained, in detail, why perpetual motion machines do work, that wouldn’t have made it true.

We once had a homework exercise where we were supposed to say why a particle behaved in a certain way. The obviously “correct” answer – the teacher’s password – was “because of Heisenberg’s Uncertainty Principle”. But the Uncertainty Principle just follows from Schrödinger’s equation, and we’re using that to solve all our quantum mechanics problems. So by that logic, basically everything happens because of the Heisenberg Uncertainty Principle. That can’t be right.

For example, when a pen falls off a desk, that seems to be proof that gravity exists, because gravity made it fall. But what is “gravity”? In 1500, “gravity” was the pen’s desire to go to the center of the earth; in 1700 “gravity” was a force that acted at a distance according to mathematical laws; in the 1900s “gravity” was an effect of curved space-time; and today physicists theorize that “gravity” may be a force carried by subatomic particles called “gravitons”. Gendlin views “gravity” as a concept and points out that concepts can’t make anything fall. Instead of saying that gravity causes things to fall, it would be more accurate to say that things falling cause [the different concepts of] gravity. Interaction with the world is prior to concepts about the world. (source)

It’s not the laws we have written down that tell reality what to do. It’s reality that tells us what laws to write. Writing down the law will not make reality obey it. But reality doing something unexpected will make us write a new law.

The point I’m trying to make here is, when you have electrodynamics homework to do, and taking a few shortcuts by pretending stuff doesn’t interact as much as the theory says will allow you to finish in 6 pages instead of 47, maybe you should do that. Because there is no “correct” model. You’ll never know what matter is “really” made of. All you can ask for is a good prediction.

Footnotes

  1. Y’know, disregarding the fact that we still haven’t found a way to combine the two to make black holes work.

Victory!

25 January 2016

1396 words

People tell me I should go to a CFAR workshop and they may well be right, so it’s time to figure out how to prevent what is inevitably going to happen there from happening.

each of the workshop’s sessions invariably finished with participants chanting, ‘‘3-2-1 Victory!’’ — a ritual I assumed would quickly turn halfhearted. Instead, as the weekend progressed, it was performed with increasing enthusiasm. By the time CoZE rolled around, late on the second day, the group was nearly vibrating. When Smith gave the cue, everyone cheered wildly, some ecstatically thrusting both fists in the air. (source)

Group enthusiasm is not for me. I’ve been to the LessWrong Community Weekend, and I’ve been to EA Global, and each time, everyone was excited and there was always the stupid cheer at the end. I do like that this is a thing – enthusiasm is good! Group cheers increase the feeling of togetherness and community. I don’t want to suggest dropping this custom. Yet, every time I’m part of this custom, I cringe and I can’t cheer or shout or wave my fists around and, instead, I start feeling anxious, sad, and not part of the group. And if I’m not really careful, I always end up in a sadness/depression spiral. I want to change that.

I was wondering why exactly it is that I get anxious and sad when the people around me are extremely happy. This seems contradictory. When people around me are sad, I get sad; when people around me are happy – but within reason – I get happy. It’s only when we get into the extremely happy territory that my happiness drops. So it looks like this:

when it should look like this:

Let’s isolate the problem:

The interpretation of this that currently feels most right to me has two parts. One, being loud, excited, and enthusiastic isn’t me; therefore, trying to pretend that I am these things feels inauthentic and wrong. Two, not being able to participate in group behavior when (a) it is expected, and (b) I want to, makes me feel excluded. So the feeling is: group cheers are not something Nino does; group cheers are something members of this group do; therefore, I do not belong to this group.

I remember different situations where I deliberately played a specific role in order to nudge my identity in a certain direction. For example, before I started my TA job, I was Not A Person Whose Job Involves Leading A Group Of People. Deciding to change that was uncomfortable and anxiety-inducing. A person who could do a job like that was not who I was, but it was who I wanted to be. So I forced myself into the role and, knowing that it would be easier not to do this alone, I had someone sit beside me as I sent out the email asking for the job. Once I’d done that, I’d become a person who can, at least, ask for such a job. Once I’d experienced doing a thing a person like that would do, actually showing up to sign the contract and then going to the classes was much easier because I could just let subconscious consistency effects play out. “Well, I did ask for this job. If I’m the kind of person who asks for a job like this, that must be because I think I can do it, and that must be because I probably actually can do this.”

I didn’t use to be the kind of person who enjoys dancing badly at parties. I’m still not 100% comfortable doing it, but ever since I put myself in a situation where I was forced to participate and was in a good mindset to accept that I was actually doing it instead of “I’m forcing myself to do something that is not Something I Do,” dancing has become much easier for me – so much so that it can even be enjoyable.

So: I alieve that I’m not a person who can shout, or cheer, or be loud and excited about things. Therefore, getting into situations where this behavior is expected of me will make me anxious. Knowing that, the solution seems relatively simple. I need to practice shouting, and cheering, and being loud and excited about things. I need to do this for as long as it takes to become less painful and aversive. For this to be successful, I need to be in an environment that feels safe to me. My best guess for what that environment would look like is: a group of 2, 3, at most 4 people, including me, in a place where no strangers can easily hear loud noises. Being inside a regular apartment with neighbors above and below would make this considerably harder. Doing this on my own won’t work because I can’t make all the noise myself. Turning on loud music or sounds from the internet won’t work because the sounds need to be human-made. As I mentioned, concerts won’t work because I don’t feel safe enough around the other audience members. Open spaces, outside, far away from any buildings would work well because you could start out by standing far apart and shouting things at the other person. Since, in that case, shouting would be necessary to transfer information, it wouldn’t feel as aversive. From there, you could slowly move closer together while keeping the volume high.

Once I’m more comfortable with shouting, we could move on to loudly displaying enthusiasm by saying, “Yay!” and “Woo!” and “Yes!” and “Victory!” really loudly, and waving your fists around and whatever people do.

I predict that, if I do this a few times, group enthusiasm will be significantly more bearable for me in the future, which would make lots of social interactions easier; and that would be extremely useful for my life in general.

I also predict that I’ll feel really really silly doing all of this. (Even more silly than I felt writing it.)

(Comment or email me if you want to be my shouting partner. This could be lots of fun.)

1/3: Just The Way Things Are

28 December 2015

Modified: 13 June 2016

514 words

[Part 1 is about a feeling about the world. Epistemic state: Maybe I shouldn’t commit to writing blog posts about every thought that occurs to me while browsing Wikipedia. Part 2, part 3.]

I decided I don’t like the term “laws of physics” to describe the way reality behaves. Calling them laws makes them sound optional1. Like, it would be really good if you didn’t break them because they are being enforced by the space police, but if you’re really clever, you can outrun the space police and break them anyway. But you can’t.

When you put two marbles down, and then you add two more, the fact that there are now four marbles isn’t a law you can break. It’s not something where some universal authority decides that this should happen by calculating 2+2. It’s just the way things are.

And so, when you hit the accelerator, there is nothing deciding to stop you from going past the speed of light. It’s just not going to happen. Look, for example, at Conway’s Game of Life. Because of the way the game is structured, there is an absolute speed limit and there is nothing you can do to go faster than that maximum speed. And still, if you program a simulation of the Game of Life, you don’t need to add a rule preventing things from exceeding the maximum speed. Like two marbles plus two marbles being four marbles, the speed limit is just a consequence of the structure of the universe.
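As a sketch of what that looks like (my own illustration, not from the post), here’s a minimal Game of Life step in Python. Note there’s no line enforcing a speed limit: each cell’s next state depends only on its immediate neighbors, so no influence can travel faster than one cell per step.

```python
# A minimal Game of Life step. There is no rule here that enforces a speed
# limit; it simply falls out of the fact that each update only looks at the
# eight neighbouring cells, so nothing can spread faster than one cell per step.

import numpy as np

def step(grid):
    """One Game of Life update on a 2D array of 0s and 1s (edges wrap around)."""
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # A cell is alive next step if it has exactly 3 live neighbours,
    # or if it is alive now and has exactly 2.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

# Example: a glider crosses one cell diagonally every four steps, comfortably
# below the one-cell-per-step maximum implied by the local update rule.
grid = np.zeros((8, 8), dtype=int)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1
for _ in range(4):
    grid = step(grid)
print(grid)
```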

But! For the people in the Game of Life, it won’t be that obvious, because they don’t see the game board. They see the contents of the cells, but not the cells themselves. So they might wonder why the speed limit exists and they might think they can somehow circumvent it. It’s only when you see the game board that you get an intuitive understanding about why these laws exist and why it’s not forbidden to break them, but a logical impossibility.

This transfers to the real world, too. There have been people who tried to build perpetual motion machines and made plans to go faster than the speed of light and theorized about superluminal neutrinos. Thinking about the laws of the universe as something that logically follows from the stuff the universe runs on, rather than as rules that exist explicitly and are somehow enforced, makes impossible things feel more impossible – you won’t trick the universe into giving you energy by building a perpetual motion machine that is so complicated that the space police don’t notice you’re stealing energy.

I thought that was an interesting intuition.

Footnotes

  1. Weellll, this is arguably inaccurate, but the point is less about the terminology and more about the intuition, so whatev.

Waves of confidence

4 October 2015

541 words

There seems to be a distinct and relatively predictable pattern to my confidence/comfort levels when I’m meeting new people and I’m wondering whether this is a common experience.

Usually, before I get to know someone (except when they’re known for doing something really interesting), it’s hard to build an interest in them. Like, I can feel completely lonely and desperately want friends and still, when I think about who to talk to, just everyone new will seem like the dullest person in the world. So, if someone happens to talk to me, the stakes are low and I’m not anxious. After one or two conversations, I manage to internalize that I’m talking to an actual sentient being and I start becoming really excited about talking to them.

If it turns out they like me, and we stay in touch for a few days, there comes a point where my brain is like, “oh wow, this is turning into a thing. Are we friends now?” And then I notice I’ve told most of my backstory and I start running out of things to say. So I’m trying frantically to find things to say and it’s just not working and it’s like, “oh gods, do I have nothing interesting to say? How can I keep the other person from losing interest?” And I get anxiety attacks and the only thing that can help is them talking to me, but they don’t because they don’t have time to talk to me like all day which is what I’d need to feel safe, and I don’t know what to do.

Eventually – if contact doesn’t stop, that is – I realize it’s okay that I sometimes don’t have anything profound to say and I get into a groove of just speaking whenever I do have something to say. I feel more or less certain that the other person cares about me as a human being, and that I won’t mess that up by saying one wrong thing, so I manage to relax and I get less anxious.

But then I realize – wait, I’m much more confident now than I was in the beginning! Maybe they only liked the shy me, or they only liked me because they didn’t get the full picture because in the beginning I was all quiet and agreeable. So I get more anxious again, and I get quieter. But then I feel like I’m holding myself back and I’m boring because I never say anything so I still try to be confident and say things and be courageous and settle into kind of a back-and-forth of being more vocal vs being more agreeable.

And after a while I get used to that and I feel better saying things. And eventually, after years and years and more sudden dips in my courage, the connection turns into a stable friendship and I don’t need to be so scared of sending them cute cat pictures anymore.

Does anyone else have a similar experience, or is it more common for confidence levels to rise linearly with time, or something else?