Non-Player Characters

December 24, 2012

Here’s an interesting idea. This article mentions “non-player characters” in the context of a role-playing game, and proposes something rather unsettling:

Many of us approach the other people in our lives as NPCs.

I’ve been thinking along similar lines. People often imagine strangers as incidental scenery, part of the social environment. This is understandable given the limits of human knowledge – there simply isn’t enough time or mental capacity to understand very much about very many people. However, we often forget that this perspective is only a necessary convenience that allows us to function as individuals. For example, if you’ve ever been in a rush to get somewhere on public transportation, you’ve probably felt that bit of guilty disappointment while waiting to accommodate a wheelchair-bound passenger. Here comes some person into my environment to take another minute of my time, right? But if you use a wheelchair yourself, this delay happens every time you catch a ride, and that frustration simply does not exist. If anything, I would imagine that disabled passengers feel self-conscious every time they are in a situation where their disability affects other people’s lives, even in an insignificant way.

Has this always been true? Probably to some degree, but the modern media environment seems to especially promote it. Good fiction writing communicates the thoughts and motivations of relevant characters, unless they are complete unknowns. This means that any meaningfully observable character has some kind of hypothesized history and experience informing their participation in the story. Film is different, in that a script can describe an “evening crowd” in two words, but the realization of that idea can involve hundreds of extras, living entire lives and working day jobs that barely relate to their final appearance on the screen. We can assume that their real lives intersected with the production of that scene on that day, but that’s really the only significance their identities have in context.

With interactive media, the idea of a “non-player character” has appeared in many forms, and academics study how they can design the best (read: most believable) fictional characters for interactive environments. Here the limited reality of these characters is even more pronounced. In video games, non-player characters have lower polygon counts, fewer animations, and generally use less code and data. This is a consequence of the limited resources available for building a virtual environment, but the effect is readily apparent and forced.

Does this mean video games shouldn’t include background characters? Not really. What I’m suggesting is that we should be careful to see this phenomenon for what it is: an information bias in favor of the protagonist, which necessarily happens while producing media. It shouldn’t ever be mistaken for a relevant characteristic of the real world. This holiday season, when you’re waiting an extra minute or two for a disabled stranger, or expecting better service from a tired professional, remember that he or she probably has lived a life as rich and complicated as your own, and try not to react as if he or she is just some kind of annoying scenery. Whoever it is might return the favor, even if you never realize it.

Falsifiability

December 6, 2012

According to Karl Popper, any scientific theory must be falsifiable. What does this entail?

A falsifiable theory leads to predictions which would be invalidated by some conceivable observation. For example, Newtonian dynamics predicts that in a uniform gravitational field, two objects with different masses will have the same acceleration due to gravity. It implies that if we drop a feather and a ball bearing inside a vacuum chamber, the ball bearing will not fall faster, as long as the theory is valid. This is in fact what happens.
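
As a toy illustration (a Python sketch with rounded constants – mine, not part of the original argument), the claim is just that the acceleration a = F/m = GM/r^2 does not contain the mass of the falling object:

# Newtonian gravity: the test mass cancels out of the acceleration.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the Earth, kg
R_EARTH = 6.371e6    # radius of the Earth, m

def acceleration(test_mass_kg):
    force = G * M_EARTH * test_mass_kg / R_EARTH ** 2   # F = GMm/r^2
    return force / test_mass_kg                         # a = F/m = GM/r^2

print(acceleration(0.001))   # a feather: ~9.82 m/s^2
print(acceleration(0.01))    # a ball bearing: ~9.82 m/s^2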

Newton’s theory was very successful at describing the celestial motion of the known planets, but in the 19th century it did not correctly predict the orbit of Uranus. Taken alone, this fact would have falsified the theory, or at least greatly limited its accuracy. However, Urbain Le Verrier knew that the gravitational field of an unknown planet could be pulling Uranus into its observed orbit, and predicted where and how massive such a planet would be. Astronomers pointed their telescopes at the expected location and discovered Neptune. If no planet had existed at that location, this prediction would have been wrong, and the inconsistency with Newtonian dynamics would have remained.

A planet orbits in an ellipse, and the point where it moves closest to the sun is called the perihelion of its orbit. This point gradually precesses around the sun due to gravitational forces exerted by other planets. The same Le Verrier compared observations of Mercury to the perihelion precession rate derived from Newton’s theory, and found a discrepancy of nearly half an arcsecond per year. He predicted the existence of another planet closer to the sun to explain his result, but no planet was ever observed and this problem remained open.

To explain other puzzling observations, Albert Einstein abandoned the Galilean transformations of Newton’s theory for a framework which uses Lorentz transformations in four dimensions. General Relativity describes gravitation as an effect of spacetime curvature. It reduces to Newtonian dynamics in the limit of weak gravitational fields and low velocities. According to this theory, the perihelion of Mercury’s orbit should precess by an additional 0.43 arcseconds per year, matching the observed value.
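
For the curious, that figure falls out of the standard General Relativity result for perihelion advance per orbit, 6πGM/(c^2 * a * (1 - e^2)). Here is a quick check (a Python sketch; the orbital values are my own textbook approximations, not from the original post):

import math

GM_SUN = 1.32712e20     # Sun's gravitational parameter, m^3/s^2
C = 2.99792458e8        # speed of light, m/s
A = 5.7909e10           # Mercury's semi-major axis, m
E = 0.2056              # Mercury's orbital eccentricity
PERIOD_DAYS = 87.969    # Mercury's orbital period

# Perihelion advance per orbit, in radians
advance = 6 * math.pi * GM_SUN / (C ** 2 * A * (1 - E ** 2))

orbits_per_year = 365.25 / PERIOD_DAYS
arcsec_per_radian = 180 * 3600 / math.pi
print(advance * orbits_per_year * arcsec_per_radian)   # ~0.43 arcsec/year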

Still, the elegance of Einstein’s theory did not automatically mean that it was a valid replacement for Newtonian dynamics. During the total solar eclipse of 1919, the measured deflection of starlight agreed with the value derived from General Relativity, a highly publicized result. Decades passed before additional predictions were conclusively validated.

Global Whining

November 28, 2012

The scientific method is the greatest invention of the modern age. For centuries, its practitioners have transformed civilization using rational systems revealed through careful observation. Theories which have succeeded not by virtue of their popularity but because they correctly predicted unknown phenomena are especially awe-inspiring. However, predictions without rational justification, or predictions vague enough to be confirmed by any number of observations, should not earn the same recognition. I’m a huge fan of Karl Popper’s principle of falsification: the idea that a scientific theory must predict some observable event which would prove it wrong. This principle eliminates uncertainty about how specific a valid theory must be. Unfortunately, it has been ignored by some academics who claim to be scientists so that people won’t laugh at their ideas. You might have already guessed that today I’m targeting the low-hanging fruit of global warming alarmism. Prepare to be offended.

I won’t waste your attention picking apart the various temperature series, criticizing the IPCC models, or citing evidence of misconduct, not because those arguments have already been made by more qualified individuals, but because they shouldn’t even be necessary. Fundamental problems with any apocalyptic hypothesis make the whole enterprise seem ridiculous. This is what Popper says about scientific theory:

1) It is easy to obtain confirmations, or verifications, for nearly every theory – if we look for confirmations.
2) Confirmations should count only if they are the result of risky predictions; that is to say, if, unenlightened by the theory in question, we should have expected […] an event which would have refuted the theory.
3) Every “good” scientific theory is a prohibition: it forbids certain things to happen. The more a theory forbids, the better it is.
4) A theory which is not refutable by any conceivable event is non-scientific. Irrefutability is not a virtue of a theory (as people often think) but a vice.
5) Every genuine test of a theory is an attempt to falsify it, or to refute it. Testability is falsifiability; but there are degrees of testability: some theories are more testable, more exposed to refutation, than others; they take, as it were, greater risks.

The scenarios published by climate modelers don’t qualify as scientific predictions because there is no way to falsify them – updated temperature measurements will inevitably correlate with some projections better than others. And fitting curves to historical data isn’t a valid method for predicting the future. Will the IPCC declare the CO2-H2O-feedback warming model invalid and disband if the trend in the last decade of HadCRUT3 data continues for another decade or two? How about if the Arctic ice cap survives the Summer of 2014? I’m supposed to trust these academics and politicians with billions of public dollars, before their vague predictions can be tested, because the global warming apocalypse they describe sounds more expensive? This riotous laughter isn’t meant to be insulting; we all say stupid things now and then.

Doesn’t the arrival of ScaryStorm Sandy confirm our worst environmental fears? Not if we’re still talking about Karl Popper’s science. Enlightened by the theory of Catastrophic Anthropogenic Global Warming, academics were reluctant to blame mankind for “exceptional events” only two months ago. They probably didn’t expect a hurricane to go post-tropical and converge with a cold front as it hit New York Bight on the full moon at high tide less than six weeks later, because that kind of thing doesn’t happen very often. Informed news readers might have expected some coy suggestion that global warming “influences” weather systems in the rush to capitalize on this disaster. But in a caricature of sensationalism, Bloomberg splashes “IT’S GLOBAL WARMING, STUPID” across a bright orange magazine cover, and suddenly enormous storm surges threaten our infrastructure again while the seas keep rising slowly and inevitably, all because of those dirty fossil fuels.

I don’t mean to say that we should actually expect scientific integrity from a stockbrokers’ tabloid, but Mike Bloomberg has really sunk to a new low. He spoke at TechCrunch Disrupt a few years ago and seemed like an average business-friendly mayor, not a shameless propagandist. I guess the soda ban was a bad omen. It’s a bit discouraging to see another newspaper endorse the panic, but then the organizers of our climate crusade have been pushing their statist agenda on broadcasters for a long time.

On Sunday, the New York Times doubled down with this ridiculous, melodramatic lament, written by one talented liberal artist. Where’s a prediction about the next exceptional event, folks? Is it going to be a tornado or an earthquake, or does it even matter? Are there actually any rules for preaching obnoxious hindsight to believers? Can anyone suggest an observation that would falsify the theory?

What will the temperature anomaly or the concentration of carbon dioxide be in ten years? How about one solid date for the eradication of a low-lying coastal city? If you must predict the apocalypse, it is only somewhat scientific if you can rationally argue for a deadline. And the science is only settled when the world ends (or doesn’t end) on time.

Plus ça change, plus c’est la même chose – the more things change, the more they stay the same. Happy 2012.

Singing

October 9, 2012

If you like music at all, take some advice from me and sing whenever you have a chance. It’s good for your soul, whatever that means. I’ve played music for many years, but I recently discovered how much singing affects my mental health. Live shows don’t count, because the crowd and the amplifiers drown you out. Sing when you’re alone in a quiet room, and fill it with your voice. Sing whenever you’re driving alone. Sing at work, as long as you aren’t disturbing anyone who can punish you. Just sing – you can probably play your own vocal cords even if you can’t play another instrument. If you sound awful, don’t worry about it. Nobody is going to harass you for singing poorly in private. Many people are too insecure to try it at all.

I’m saying this because something wonderful happens when we hear ourselves sing. It helps the brain. Matching pitch forces us to listen to the sounds that we produce, and quite possibly to produce sounds that we are not entirely comfortable hearing. This is a good thing! It teaches us to be more comfortable whenever we use our voices, and more confident in general. The sooner you get accustomed to hearing yourself make unfamiliar noises, the sooner you will be able to make noise in public with absolute confidence. Sometimes, that is absolutely necessary. Why not be prepared?

Degrees and Freedom

August 6, 2012

Here’s my idea of a good math lesson. I want to explain Euler’s formula, the cornerstone of multidimensional mathematics, and one of the truly beautiful ideas from history. In school this formula appears as a useful trick, and is not commonly understood. I think that is because students are denied enough time to wonder what the formula actually means (it doesn’t describe how to pass an exam). Here is Euler’s formula:

e^(ix) = cos(x) + i*sin(x)

This idea was introduced to me after a review of imaginary and complex numbers. Once the history and definition were out of the way, we completely freaked out at the idea of putting ‘i’ in the exponent, then practiced how to use it in calculations. I might have had a brief moment of clarity in that first class, but by the AP exam Euler’s formula was nothing more than a black box for converting rectangular coordinates to polar coordinates.

Many years later, I came across the introduction to complex numbers from Feynman’s Lectures on Physics, and suddenly the whole concept clicked in a way that it never had in school. Explained that way, I don’t think it is really that difficult to understand – but then, I’ve already managed to understand it. So I’ll try to communicate my understanding, and you can tell me whether it makes sense.

We need to start by generalizing the concept of a numeric parameter. The number line from grade school is an obvious way to represent a system with one numeric parameter. If we label the integers along this line, each mark corresponds to a grouping of whole, countable things, and the value of our integer parameter must refer to one of these marks. If we imagine a similar system where our parameter can “slide” continuously from one integer to the next, the values that we can represent are now uncountable (start counting the numbers between 0.001 and 0.002 if you don’t believe me), but opening up this unlimited number of in-between values allows us to model continuous systems that are much harder to represent with chunks.

Each system has a single numeric parameter, even though the continuous parameter can represent numbers that the integer parameter cannot. In physics, a continuous parameter can represent what is called a “degree of freedom”: basically, a quantity that changes independently of every other quantity describing the system. Sometimes a “degree of freedom” is just like one of the three dimensions that you can see right… now, but this is not always the case. Wavefunctions in particle physics can have infinitely many degrees of freedom, even though the objects described by these esoteric equations follow different laws when we limit our models to the four parameters of spacetime.

Anyway, the imaginary unit or ‘i’ is just some different unit that identifies a second numeric parameter. If we multiply an integer by ‘i’, we’re basically moving a second parameter along its own number line that same distance. Apply the “sliding” logic from before and we can use the fractional parts between each imaginary interval. If this sounds new and confusing, just remember that any “real” number is itself multiplied by the real unit, 1. Personally, I don’t think that the word “imaginary” should be used to describe any kind of number, because all numbers are obviously imaginary. However, this convention exists regardless of how I feel about it, and nobody would know what to put in Google if I used a different word.

Why do teachers use this system where one implicit unit is supplemented by a second explicit unit? Simple – it was added long before anyone fully understood what was going on. The imaginary unit was the invented answer to a question, that question being:

Which number yields -1 when multiplied by itself?

The first people to ask this question didn’t get much further than “A number called ‘i’ which is nowhere on the number line, and therefore imaginary.” If those scholars had described their problem and its solution in a different way, they might have realized some important things. First, this question starts with the multiplicative identity (1) and really asks “which number can we multiply 1 by, twice, to leave -1?” Thinking about it like this, it soon becomes clear that the range of values we can leave behind after multiplying 1 by another value on the same number line, twice, cannot include -1! We can make 1 bigger, twice, by multiplying it by a number greater than 1, or smaller, by multiplying it by a value between 0 and 1. We can also negate 1 twice while scaling it up or down, but none of these options allows for a negative result!
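
If that argument feels too slippery, brute force restores the intuition (a trivial Python sketch; the sample values are arbitrary):

# Multiplying 1 by the same real number twice never leaves -1,
# because a real number times itself is never negative.
for value in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print(1 * value * value)   # always >= 0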

A clever student might point out that this is a stupid answer, and that we might as well say there is none, but we still learn about it because amazing things happen if we assume that some kind of ‘i’ exists. We can imagine a horizontal number line, and then a second number line going straight up at 90° (τ/4 radians, a quarter turn) from the first. Moving a point along one line won’t affect its value on the other line, so we can say that the value of our ‘i’ parameter is represented on the vertical line and the value of our first (“real”) parameter is represented on the horizontal line. That is a complex number (a*1 + b*i): a single point imagined on a 2-dimensional plane. In this space, purely “real” or purely “imaginary” numbers behave just like complex numbers with zero for the value of one parameter.

Now think about the answer to that question again. If our candidate is ‘i’ or some value up “above” the real number line, it’s easy to imagine a vector transformation (which we assume still works like multiplication) that can change 1 to ‘i’ and then ‘i’ to -1 in this 2D number space: just rotate the point around the origin by 90°. When our parameters are independent like this, multiplying by ‘i’ some number of times is exactly like rotating the imagined “point” a quarter turn around zero that many times. I don’t really know why it works, but it works perfectly!
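
You can watch the rotation happen numerically (a minimal Python sketch; Python spells ‘i’ as 1j):

# Each multiplication by i is a quarter turn around zero.
point = 1 + 0j               # start at 1 on the real line
for quarter_turn in range(4):
    print(point)             # 1, i, -1, -i (in Python's complex notation)
    point = point * 1j       # rotate 90 degrees counterclockwise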

We’ve seen that imaginary units simply measure a second parameter, and how this intuitively meshes with plane geometry. Now let’s review what is actually going on. Numbers multiplied by ‘i’ behave almost exactly like numbers multiplied by 1, but the important thing about all ‘i’ numbers is that they are different from all non-‘i’ numbers, and therefore can’t be meaningfully added into them – the two parts of a sum stay separate. The ‘i’ parameter is a free parameter in the two-parameter system that is every complex number. It can get bigger or smaller without affecting the other parameter.

Bringing this all together, let’s try to understand what Euler was thinking when he wrote down his formula, and why it was such a smashing success. He noticed that the Taylor series definition of the exponential function:

e^x = 1 + x + x^2/2! + x^3/3! + x^4/4! + x^5/5! + …

Becomes this:
e^(ix) = 1 + i*x - x^2/2! - i*x^3/3! + x^4/4! + i*x^5/5! - x^6/6! - …

When ‘i*x’ is the exponent, because the integer powers of ‘i’ go round our complex circle from 1 to i to -1 to -i and back. Grouping the real terms and the ‘i’ terms together suddenly and unexpectedly reveals perfect Taylor series expansions of the cosine and sine:
e^(ix) = (1 - x^2/2! + x^4/4! - …) + i*(x - x^3/3! + x^5/5! - …)

As each expansion is multiplied by a different unit, the two expansions don’t add together, naturally separating the right side of our equation into circular functions! We can just conclude that those functions really are the cosine and sine of our variable, remembering that the sine belongs to the ‘i’ parameter, and it works! Because these expressions are equivalent, having a variable in the exponent allows us to multiply our real base by ‘i’ any fractional number of times (review your exponentials), and thus rotate to any point in the imagined complex plane. There are other ways to prove this formula, but I still do not understand exactly why any of the proofs happen the way they do. It’s not really a problem, because Euler probably didn’t understand it either, but I’d still like to come across a good answer someday. What I know right now is that any complex number can be encoded as a real number rotated around zero by an imaginary exponent:

e^(ix) = cos(x) + i*sin(x)

Here is proof that certain systems of two variables can be represented by other systems of one complex variable in a different form, and the math still works! Euler’s formula is a monumental, paradigm-shattering shortcut, and it made the modern world possible. I’m not overstating that point at all: everything from your TV to the Mars rover takes advantage of this trick.
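
If you’d rather check than take my word for it, here is a numerical sanity check (a minimal Python sketch using the standard cmath module; the angle is arbitrary):

import cmath
import math

x = 0.75                                       # any angle, in radians
euler = cmath.exp(1j * x)                      # e^(ix)
circular = complex(math.cos(x), math.sin(x))   # cos(x) + i*sin(x)

# The truncated Taylor series from above, real and 'i' terms still mixed:
series = sum((1j * x) ** n / math.factorial(n) for n in range(20))

print(abs(euler - circular))   # ~0: the formula holds
print(abs(series - euler))     # ~0: the series really converges to it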

St. Jobs

June 10, 2012

I’m sure there will be glowing biographies about Steve Jobs and his many accomplishments in time, and that guy deserves every single bit of the massive praise that is heaped upon him. Some of the most interesting comments come from the journalist disciples who all but compare Jobs to Jesus at every opportunity, and from the corresponding messiah-doubters who say that Jobs was nothing more than a savvy businessman who understood timing, manufacturing and product placement. Other contrarians are understandably uncomfortable with his role in the commercialization of independent software and his control over the iUniverse, casting him as a charismatic software dictator. Many of these people have Apple on their “modern hypocrites list” and might tell you so if the conversation wanders in that direction.

Here’s maybe the one valid way I could compare Steve Jobs to Jesus: Jesus was all about ideas that could outlast and defeat humans, no matter how powerful they might seem at the time. Steve made computers and computer systems that will outlast their owners. I can’t possibly imagine a day when my iPad (2) is any less useful or amazing than it is today unless it smashes, no matter what the next ten versions look like. We are going to have to explain to our kids that this is a weird new thing! Computers used to be rickety, noisy boxes with all these wires and different parts sticking out everywhere, and they used to break all the time when a competent engineer wasn’t available to keep things working! All you early majority consumers of a certain influential desktop operating system know exactly what I am talking about…

Somewhat ridiculously, the very approach that allowed Steve and Apple to end this massive problem with casual computing was his uncompromising, even autocratic management of the platform. It feels like I might be stepping on the dreams of the free and open software communities a bit here, but I think I’m starting to understand the actual logic in favor of Apple’s paradoxical mecha-fascism, if only because I program sounds and other fast things. Not every computer has the luxury of being some genius freedom-fighter’s personal data management device. Many computers have to control cars, and medical machines, and all those other things that can’t break or otherwise present an end user with some unpredictable software issue that needs debugging. When my grandma is trying to call me on video chat, it has to work the same way. That was his reasoning, I think, and I have to agree that it makes a lot more sense than it used to.

One day, when we’re donating these old tablets to needy kids or whoever, we might remember Steve by understanding what he wanted to create: a world united by its magical and powerful technology – technology we can use to do formerly impossible things, without losing all of our time in the process.

Comedy

May 23, 2012

Today I want to talk about comedy, because it is an absolutely amazing subject. The fact that an entire dynastic profession exists to make groups of (hopefully drunken) strangers laugh on command just seems kind of unbelievable. Clearly there is something deep and transformative about laughter, but what does it really mean when a person is compelled to laugh at something? For any aspiring jokesters out there, how can a comedian create this situation and get paid?

Well, in scientific terms, laughter is probably caused by something that behaves like a central pattern generator in the nervous system. These neural structures generate rhythmic output patterns without relying on any external feedback, so it is a bit strange to apply this concept to laughter (a person has to hear or see every joke, for example). However, the laughter usually happens only after a person gets the joke, at which point the “joke input” has ended in almost every case.

Therefore we should probably be conceptualizing laughter as an internal rhythmic feedback loop that can be started by some “funny” input. The challenge then is to define a “funny” input. I’ll pause for a second here so you can try that…

But wait! Doesn’t the very incomprehensibility of the challenge suggest something profound about how we should understand humor? Everyone knows that jokes are hard to write because an original comedian has to be the first person to notice that a certain thing is funny. The whole art of comedy revolves around having some of that uncommon and funny knowledge, and choosing to reveal it in the most entertaining way possible. Knowing this, is it possible to imagine something that all funny things must have in common?

Well, sure. They’re all “correct” in some abstract sense. Comedy is the process of being so profoundly correct that other people are compelled to laugh as soon as they realize what is going on. We college-educated folk can scoff at low-brow humor, but almost any example of “bad” comedy still does reveal more than a few simple truths to more than a few tragically underinformed people, and therefore it can still make a lot of money. The fact that a thing is not funny to every person might tempt us to look for something that is “funny” in some platonic sense. Somewhat disappointingly, there is no such thing. That makes good comedy very hard work, but at least we don’t ever have to fear the funniest joke in the world.

(From this perspective, slapstick humor is a special case where the truth being revealed is basically how badly it must suck for the victim…)

Generally speaking, this is not a new idea at all. A government document says this:

The American comedian Will Rogers was asked how he conceived his jokes. He answered: “I don’t make jokes. I just watch the government and report the facts.”

See what I mean? Sometimes the truth is funnier than “comedy.”

Several Woody Allen bits are included as example one-liners, like this one:

I can’t listen to that much Wagner. I start getting the urge to conquer Poland.

It’s funny because it combines and reveals several truths in a clever and efficient way:

  • Wagner was a German imperialist.
  • Music conveys emotion.
  • Germany conquered Poland (and murdered millions of Jews) in World War II.
  • Woody Allen is Jewish.

The joke actually depends on the audience already knowing all of these things, and the “trick” is that he alludes to each in such an efficient and thought-provoking way, in the space of two short sentences. When we realize, all at once, the absurdity contained in the idea of a modern American Jew savoring hypnotic war hymns that ushered in the Second Reich, the effect is very funny for a lot of people, even if they don’t want to think about it!

I’m particularly interested in this method of “humor analysis” because it seems to emerge so naturally from a feedback-dominated model of intelligence. Laughter happens when a person notices something that is interpreted as “true enough” to activate an unconscious neural feedback loop, forcing them to externalize their acknowledgement and understanding. That is the sole evolutionary function of laughter, a phenomenon which almost certainly had a pivotal role in the building of every human civilization.

This is not to say that Adam Sandler is the greatest American ever, or even that we should all start studying Internet memes for the sake of science. But it does mean that we should take a moment and bow our heads in respect to every person who has ever wanted to make another person laugh, and in recognition of the great things they have accomplished for the sake of humanity. Because when a whole country stops what it is doing and starts laughing (against its will) at the same idea at the same time, you can probably trust that idea a bit more than usual.

How would I define a “funny” thing? Funny things are true enough to make people laugh.

Here is someone else’s definition:

There is no simple answer to why something is funny… Something is funny because it captures a moment, it contains an element of simple truth, it is something that we have always known for eternity and yet are hearing it now out loud for the first time.

Variables

January 18, 2012

What is the most important thing that a programmer should do? The textbook answer, “comment everything”, only ensures a minimum viable reusability, especially when nonsense appears like:

//UTF-8 Kludge
or
//DON'T REMOVE!!!

causing unsuspecting developers to detour for what could be hours (are office distractions NP-complete?) and jeopardizing the sanity of everyone involved. Don’t get me wrong, great comments are indispensable works of art, but to be able to write them, you must first write a great program.

Then what is a great program? It depends on who you ask. Rails users might argue that a great program says everything it needs to say exactly once in unreasonably concise Ruby, and includes modular unit tests for 100% of the code base. LISP enthusiasts might praise a program for its mind-boggling recursive structure, and its unreasonable efficiency. However, in both of these cases, and always for any novice programmer, the most important feature of a great program is its consistent and correct use of variables.

Put another way: a computer program’s codebase is unsustainable if its variable identifiers can’t be interpreted correctly by developers. This means that calling a variable “temp” or “theAccountNumber” is always a bad idea unless it is actually correct to think of that variable as “the temp variable”, or the only “account number” that that particular program (or section of the program) uses. We are at a point where nearly every bottleneck I encounter in everyday software development is between my mind and an unfamiliar block of code. If there is a chance of confusing anything else with whatever you’re naming, it’s the wrong name.

What is the right name? That’s another question with a multitude of answers. CamelCase with the basic, accurate name of a variable is a good place to start, meaning that if I were to create a variable to hold the text of some person’s additional info popup, it might be a good idea to start with one of the following:

InfoPopupText
PersonInfoPopupText
SarahInfoPopupText

depending on the context, i.e. what the program (or section) is supposed to do. Most developers I have met use the first letter of a variable to encode a bit of extra information, as I might if I decided to use a lower-case first letter for an instance-level variable:

personInfoPopupText

or possibly prepend a special character for private variables:

_personName
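
To make the principle concrete, here is a hypothetical before-and-after (a Python sketch, so the idiomatic casing is snake_case rather than CamelCase; the names are invented for illustration):

# Before: the reader has to reverse-engineer what "temp" and "data" mean.
def process(data):
    temp = data * 12
    return temp

# After: each name says exactly what the value is, and nothing more.
def yearly_total_from_monthly(monthly_payment_amount):
    yearly_payment_total = monthly_payment_amount * 12
    return yearly_payment_total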

As with everything, the key is to use whatever makes the most sense to the people who need to work on it. If we are to sidetrack into a bit of philosophy, accurate naming is what makes all intelligent communication possible. The only thing that allows you to understand my word (“chicane”) is the fact that it has been used in public to refer to some persistent feature of the universe, and therefore anyone else can imagine what I imagine (or check Wikipedia) when I use it. This applies to all formal and mathematical language too: the only thing that is required to understand a given theorem is an accurate understanding of the words that it contains. Be careful, though: accurately understanding some of the words that mathematicians use is a lot harder than it sounds.

Are any non-programmers still paying attention? This part applies to you too. As the old 20th-century paradigm of “files, folders and windows” starts to look more and more passé, why not ditch that menacing behemoth that you call your “backup” and start organizing files by naming them correctly? (Spelling is important!) If you do that, just Gmail them all to yourself, and then they will be archived as reliably as Google is, forever. If you picked the right name for a file, you’ll know what to search for when you need it.