Tuesday, November 20, 2018

Abstraction and context


So, I gather that making a decision based on theory is abstract, while applying the decision would be contextual. I thought of an example after reading his post. Upper management makes the decisions about packaging and maintenance of the Client's database, which is an abstract view of the implementation. But when we, as database analysts, apply the strategy, we find different results, and that is contextual.

Would that be correct?

You are finding out what actually works. We tend to think that what upper management does is analysis, and it is. But it is a certain type of analysis based on abstract methods of seeing reality. Numbers are the ones that come readily to mind. In the offices above, analysis is based on numbers more often than not, and the assumption is that those numbers represent reality, and represent the whole of reality. But they are abstractions only. Do they ever match reality? They can, but sometimes they don't, or at least they don't represent the whole of reality. The ones who are on the front lines, though, are in contact with that reality and can readily see whether something will work or not. They are the ones in context.

Reminds me of the old story of a man who is out walking at night and comes upon a woman on her hands and knees under a street light evidently looking for something.

"What's the matter," he asks.
"I lost my ring and I am trying to find it," she says.
"Well, where exactly did you lose it?" he asks.
"Over there by that bench," she says pointing to a bench 30 feet away.
The man amazed says, "Well, why are you looking for it over here?"
"Because the light is better."

A lot of design considerations are made in the abstract. Computer modeling and simulations have narrowed the gap some between conception and reality, but it still exists.

The Challenger Commission investigated the accident of the shuttle by that name, but much of its work was intended as a whitewash of NASA. The problem was that they appointed to the panel a man named Feynman, a physicist and Nobel Prize winner. The panel was given all the statistics on the O-ring seal, all the test data on it, and the gist of it all was that the O-ring could not have failed.

Feynman took a piece of that O-ring, put a clamp on it, and placed it in a cup of ice water sitting on the table in front of him. He took it out a few minutes later, and you could see with your own eyes how the O-ring would not return to shape at temperatures near freezing. It was that ability, the ability to return to shape, that the shuttle depended on to seal the rocket boosters. Feynman showed the world that it could not do it.

What the panel had been presented were abstractions; Feynman gave them context.

We shouldn't overplay the context issue, because you cannot ever come to any generalizable conclusion if you do not engage in some abstraction. But there is really no risk that people will think purely contextually; we are made to think abstractly. The problem is that we tend to go too far along the continuum and prefer abstractions to anything else. In other words, the real risk is that we might think purely abstractly instead. Thinking abstractly in context would be the ideal.

If you can notice when you do it, it will be a real help in your thinking.

Abstraction and context continued


Scott

The Feynman O-ring illustration fits the explanation of the difference between abstraction and contextual analysis quite well. But is it possible to use the two approaches in combination?

It is possible if you think of them as falling along a continuum, with abstraction at one end and context at the other. The further you go in one direction, the weaker the other becomes. The problem is that we have gone too far in the direction of abstraction.

It might be put this way: Generalize from context. That would mean looking at the context and generalizing from there. Or maybe this way: Check your work with the facts on the ground. That would mean making your assessments with abstract methods but then checking the results against what is actually happening on the ground.

The problem with that, though, is that the result can be so out of context that there might be a tendency to look for facts, or to use what are essentially assumptions, to fit it.

With some disciplines, abstract methods are all you have. But with others, there is a lot you can learn by actually going and looking or doing an interview or by watching how people interact with the thing. You can learn a whole lot about things by just doing that.

Monday, November 19, 2018

What we know

How do we know anything we know?

Do you actually know what you think you know or are you relying on what someone else knows about the subject? Do we have to be there to actually know something?

You could take this skepticism a bit further, couldn't you? Will the sun come up tomorrow? If you say yes, and your criterion for knowing something is that you are there and experience it yourself, then how could you say that it will? You aren't there in the future right now to be able to make that statement, are you? And if you say that the past is the key, and that you were there for past incidences of the sun coming up, how is it that past incidences of a thing happening necessarily mean that the thing will happen again? If a chicken is fed every day at a chicken farm, wouldn't its expectation be that the very next day it will be fed again? That day just might be the dressing-out day--the kill-it-for-food day. This is the induction problem that Hume identified.

This is a problem not only for history but for just about every piece of knowledge that we say we know. Was the atom split? Do you really know? Have you ever seen one split? How could you tell if an atom were split even if you were there to experience it? And if you see the mushroom cloud from an atomic explosion--an explosion, by the way, which not many have seen in person--can you be sure that it is because of the splitting of the atom? Aren't you taking people's word for that?
The same thing can be said about anatomy: how many have ever seen a human heart in person? Pictures don't count, because they can be falsified. Or geography: how do you know that there is such a thing as a France or a Russia? Or the birth of babies you haven't seen yourself. Or illness: "That cold is caused by a virus," says the doctor. How do you know? Have you ever seen a virus? Does a microscope count? Isn't there an assumption that the microscope actually lets you see microscopically small things? Do you know that is true? And even if you have seen a virus, how do you know that the cold is caused by that virus, or by a virus at all? Or political history: how do you know that George Washington defeated the British, that there was a Revolutionary War in the first place, or that there was even a "British" or a George Washington? His home is there with his pictures in it, but how do you know that it was really his home? How do you know there was a Constitutional Convention, or that there was even a signing of the Declaration of Independence at all? If we have a document, does that prove that it was in fact signed as is purported to have happened? Or psychology: "The brain is the seat of the mind." Have you ever seen a brain--in person, that is? Pictures can be falsified, and if you see a brain without having seen it in relation to a person--that is, exposed by a cutting into the skull--how do you know that it in fact comes from a skull? Or love: how do you know that your husband or wife loves you?
You can't get into their mind to know. The same goes for any other thing that we do not know from firsthand experience, which is about everything we know.

The point is that we have to rely on others, and to some extent on the honesty of others for the very knowledge that we have. If we had to rely on firsthand experience for our knowledge, that knowledge would be very limited.

This means that all of the information you have learned in school is information that you yourself have not verified or experienced firsthand. All of it. (If you say, "The same thing happened to me at work that I learned about in class," is that the same as being able to generalize about it? The knowledge you have learned is generalized and generalizable to most other situations. If you weren't there for those other situations, then you can't say you know them firsthand.)

What does this mean? Does it mean that we should discount everything we know that we have not experienced? No, but it might mean that we should not treat everything we know as the once-and-for-all truth. We should test what we know as we go along. If it keeps recurring, if it keeps showing up, then we can be more confident that it is the case.




Of bullet holes, bombers and survival bias

It was World War II and Europe was under Nazi control. The American military and the Allies were trying to destroy German industrial might in a bid to hasten the end of the war, so bombers struck targets from the air.
But they couldn't fly into German-controlled airspace unopposed. German gunners shot to kill, and enemy fighters, closer to their supply lines and able to fly multiple sorties against the same Allied bombing run, closed with them in the air. As bombing missions pushed further and further into Germany, they exceeded the range at which Allied fighters could escort them in and back out. That left only the guns they had on board to defend them.
As a result, these bombers were shot down at high rates. On some missions, losses ran as high as forty percent.
This was the day of strategic daylight bombing, and it was dangerous. From its beginnings in 1942, it quickly became apparent that it was statistically impossible for flight crews to complete twenty-five missions over Europe before being shot down.
That was too much, so the military had a problem: They had to better protect these bombers.
But how could they do it?
A long range fighter would be a solution. But they wouldn't have one until later in the war with the introduction of the P-51.
So that was out.
That left only one other option: Armor.
Armor the planes.
This was a simple-sounding fix. But it wasn't simple at all. It had problems.
First of all, you couldn't just armor the whole airplane. That would make it more vulnerable. Full armor made the planes heavy, and heavy meant they couldn't maneuver as well to take evasive action. And since that amount of armor wouldn't make the bombers bulletproof or flak-proof anyway, only more bullet- and flak-resistant, enemy gunners and fighter planes could get a bead on them, keep them in their sights for much longer, and concentrate their fire. The result would be more downed planes anyway.
The second problem was that these heavier bombers would use more fuel, which meant less range. That increased the risk of ditching the plane before it made it home. Or it would force more of a trade-off with the third problem.
Payload. Heavier bombers couldn't carry as many bombs. Fewer bombs carried meant fewer bombs dropped on the targets, so more planes would have to be flown or more missions sent to get a particular job done. And if payload had to be sacrificed to take on more fuel, even fewer bombs would be available.
The end result would be more planes sent out and more planes shot down.
So armoring the whole plane wasn't going to do it.
But what about armoring parts of the planes? What if they could find out which parts were taking the most hits and armor those? They could armor only those parts that needed it, the parts that were sustaining the most damage, the most vulnerable parts. Doing this would protect the plane and not add all that much weight so they wouldn't need to sacrifice maneuverability, payload or fuel consumption.
But which parts?
They didn't know but they had some data.
They had the planes that returned.
They could examine the damage on the planes that came back and see where they took the most hits. Those would be the spots to armor.
That's what the military researchers went out and did. They examined the returning planes and found that the greatest number of bullet holes and shrapnel damage were to parts of the fuselage and to the wings and tail.
That was where they'd add the armor.
They took this assessment to the top brass. But the top brass wanted confirmation. They wanted the Statistical Research Group (SRG) to weigh in on it.
So the researchers took their data and their conclusions to the Statistical Research Group, to the man many considered to be the smartest of the smart people in that group.
That man was Abraham Wald.
Abraham Wald was a Jewish mathematician born in what is now Romania but was then part of the Austro-Hungarian Empire. He went to the University of Vienna for his academic studies and earned a degree in mathematics.
But it was the 1930s and Austria, like the rest of the world, was in economic distress. Things were tough all around and jobs were hard to come by. But what made it worse for him was that he was both a foreigner and a Jew.
German ideas of racial superiority and purity marching hand in hand with German nationalism “uber alles” were making an appearance then and anti-Semitism had had some respectability since at least Wagner in the 1800s. And maybe even before that.
There were some in Austria at the time who looked to Germany for their inspiration and many of them aspired to unify the country with the German fatherland so anti-Semitism was rife.
That made things difficult for Abraham Wald. But a friend who worked in the Austrian Institute for Economic Research, intervened. This friend, Oskar Morgenstern, who would later immigrate to the United States and help develop game theory, put in a good word for him and the Institute hired Wald.
In 1938, when Germany annexed Austria, Wald fled to the U.S. to escape Nazi persecution, a persecution that would begin in the ghettos and end for millions of Jews in the concentration camps with the Nazis' unspeakable Final Solution.
He took a position first at an economic institute in Colorado Springs and then left there for a position at Columbia University in New York.
When World War II broke out, Wald became a member of the Statistical Research Group.
The Statistical Research Group was a cluster of very smart people gathered together to help solve the problems of war fighting. According to Jordan Ellenberg, the SRG, “where Wald spent much of World War II, was a classified program that yoked the assembled might of American statisticians to the war effort —something like the Manhattan Project, except the weapons being developed were equations, not explosives.”
A number of other groups were housed in the same building with the SRG. They worked on things like the optimum maneuvers for fighter pilots in a dogfight, protocols for strategic bombing, Columbia's part of the atom bomb project, among other things.
Pretty important stuff in their own right.
But, according to Ellenberg: “[T]he SRG was the most high-powered, and ultimately the most influential, of any of these groups...
It was the 'the most extraordinary group of statisticians ever organized, taking into account both number and quality...' This was a group where Milton Friedman, the future Nobelist in economics, was often the fourth-smartest person in the room.
“The smartest person in the room was usually Abraham Wald. Wald...functioned as a kind of mathematical eminence to the group. Still an 'enemy alien,' he was not technically allowed to see the classified reports he was producing; the joke around SRG was that the secretaries were required to pull each sheet of notepaper out of his hands as soon as he was finished writing on it” because he wasn't cleared to read it.
It was to this Abraham Wald that the military took their conclusions and supporting data.
Wald looked at the data and came back with his own recommendation.
It was not the recommendation the military was looking for.
Wald told them they had it wrong, that they were going to armor the wrong parts of the bombers. They were going to armor where the bullet holes were. But, he said, they should do the opposite; they should armor the places where the bullet holes weren't.
The places where the bullet holes weren't? This seemed a stupid answer. These military men had the data and knew which parts of the airplanes were sustaining the most damage—how could they possibly be wrong about that?
They were wrong, Wald insisted, because they were operating under an assumption that was wrong. They were assuming that the returning airplanes were a representative sample of all the planes that had sustained damage. But that just wasn't true. What they actually had was only a sample of those planes that had returned, the planes that had survived.
These were the planes that had been hit but were still able to make it back home. The data they had came only from those planes.
But the planes that had not survived, the ones that had been so damaged by enemy fire that they could not return?
According to Wald, they had sustained damage in the places where the bullet holes were not found on the returning planes. If you could sample these planes, the downed planes, he argued, you'd find that the damage would be on the other parts of the plane, the cockpit and the engines.
The military had in fact made an assumption when they thought they were relying on facts and that assumption wasn't reasonable. The enemy wasn't shooting at and hitting only the wings, fuselage and tail of these bombers. This was especially true of the anti-aircraft fire which was designed to explode and rip the plane open with shrapnel. They were shooting at the planes themselves, at the whole plane, not specific parts of it. That fact would have created a more even distribution of bullet holes and shrapnel damage across the whole airplane if the downed planes could be examined.
Armor the places where the bullet holes weren't. That was Wald's recommendation and he ended up convincing the military. They armored where the bullets weren't, the engines and cockpit, and the purported result was that more planes survived.
What Wald saw was what is called survival bias. Survival bias is the error that occurs in thinking when attention is focused only on the people or things that have crossed a certain selection threshold; everything else is simply ignored. Only those that cross that threshold are considered relevant to the issue, not those that do not cross it, that do not survive.
Survival bias is a cognitive bias, a bias that can skew the results of critical thinking. It not only affects the military but it can bias any thinking in any area. That means it's a potential problem for business, manufacturing, finance (including personal finance), economics, investments, marketing, the study of history, medicine, and the fields of architecture and construction, as well as any attempts to enlighten you about the habits of highly successful people, for instance.
But notice what Wald did here. He saw the survival bias but to find a solution to the military's problem he had to make some assumptions himself. The military's assumption wasn't any good but he himself couldn't get away without making assumptions either.
His assumptions, however, were more reasonable.
Let's look at them.
The first assumption he made was that the greater number of planes were being downed by enemy fire rather than by mechanical failure or pilot error. In a war where the enemy was shooting back this was a reasonable assumption. It was more likely that enemy fire was the cause of the downed airplanes and not a failure of that plane's systems or pilot mistakes.
Wald's second assumption was that the bullet holes would be more evenly distributed along all parts of the plane if all the planes could be examined.
Again this was a reasonable assumption. It's like I said before, the gunners were shooting at the planes not at specific parts of the planes. That would mean that bullet holes would be found all over if the planes were considered in the aggregate.
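Wald's reasoning can be sketched in a small simulation. This is only an illustration under assumed numbers, not his actual mathematics: the hit locations, hit counts, and per-section survival chances below are all hypothetical. It assumes, as Wald did, that hits land evenly across the plane, and that hits to the engine or cockpit are far more likely to down it. Counting hits only on the survivors then reproduces exactly what the researchers saw: lots of damage on the fuselage, wings, and tail, and little on the engine and cockpit.

```python
import random

random.seed(0)

SECTIONS = ["engine", "cockpit", "fuselage", "wings", "tail"]

# Hypothetical per-hit survival chances: a hit to the engine or cockpit
# is far more likely to bring the plane down than a hit elsewhere.
SURVIVE_HIT = {"engine": 0.4, "cockpit": 0.5,
               "fuselage": 0.95, "wings": 0.9, "tail": 0.9}

def fly_mission():
    """One bomber takes 1-5 hits, each landing uniformly on any section
    (Wald's even-distribution assumption). It survives only if it
    survives every hit."""
    hits = [random.choice(SECTIONS) for _ in range(random.randint(1, 5))]
    survived = all(random.random() < SURVIVE_HIT[h] for h in hits)
    return hits, survived

all_hits = {s: 0 for s in SECTIONS}       # damage on every plane sent out
survivor_hits = {s: 0 for s in SECTIONS}  # damage visible on returning planes

for _ in range(100_000):
    hits, survived = fly_mission()
    for h in hits:
        all_hits[h] += 1
        if survived:
            survivor_hits[h] += 1

# The survivors under-report engine and cockpit hits: the planes hit
# there mostly never came back to be counted.
for s in SECTIONS:
    print(f"{s:8s} all planes: {all_hits[s]:6d}   survivors: {survivor_hits[s]:6d}")
```

Even though hits land evenly across all planes, the returning sample shows far fewer engine and cockpit hits, which is precisely why the holes you can count point away from the places that need armor.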
Without both of these assumptions, Wald couldn't have come up with an answer. This just goes to show that assumptions can be useful as long as they are made knowingly and they are made with the best approximation of reality that you can come up with at the time. But assumptions are often rejected as something that will only make an ass out of you and me.
Don't believe it. But we'll talk about this more later.
For now, though, count the bullet holes. But make sure you don't just count the ones that survived.
I'm Scott Clark. And this has been On Thinking.