Practical Cybersecurity, part 3 – Experience
Whenever people learn new information, they fit it into their existing model of how the world works. There is a children’s book called Fish is Fish, about a fish who lives in the ocean and wants to see the rest of the world, so he asks his friend Frog to venture out on land and report back to him. Frog agrees and goes to see the rest of the world.
A couple of days later, Frog returns and tells Fish about all of the things he saw: birds in the air, dogs running around on the ground, and large buildings that people go into and out of. Fish, however, imagines these things according to his own experience. A bird is a fish with wings, a dog is a fish with feet, and buildings are large rocks that fish dart in and out of. Fish models this new world after the world he is familiar with.
Similarly, young children’s model of the world is that it is flat. When told that the earth is round, they picture a disc. When told that it is round like a sphere, they picture a flat disc inside a sphere. The children are not stupid, only ignorant: they do not yet have the knowledge needed to change their model of the world, so they fit the new information into what they already know.
People’s minds are like a ball of yarn, and their existing ideas are like the strands: some unconnected, some loosely interwoven. Instruction is like helping students unravel individual strands, label them, and weave them back into a fabric so that their understanding is more complete. Later understanding is built upon earlier beliefs. While new strands of belief are introduced, rarely is an earlier belief pulled out and replaced. Rather than denying students’ existing beliefs, teachers need to draw them out, refine them, and integrate them into more accurate conceptual frameworks.
In the classroom, if prior beliefs are not engaged, students revert to their preconceptions as soon as the test is over. In one experiment, a subject was tested on memorizing long strings of random digits. At first he could only remember about seven, but over time he worked his way up to 70 or more. He did this by breaking the digits into meaningful chunks; a runner himself, he could recode a group such as 3492 as “3 minutes 49.2 seconds, nearly a world-record mile time.” However, when the researchers switched him to letters, he reverted to remembering only about seven characters.
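The chunking strategy from that experiment can be sketched in a few lines of Python. The chunk size and the mnemonic labels here are illustrative assumptions, not the subject’s actual encoding scheme:

```python
def chunk(digits: str, size: int = 4) -> list[str]:
    """Split a digit string into fixed-size chunks."""
    return [digits[i:i + size] for i in range(0, len(digits), size)]

# A 16-digit string is hopeless to recall digit by digit...
raw = "3492194919762001"

# ...but each chunk can be tied to existing knowledge (a running
# time, a familiar year), which is how the subject stretched his
# recall well past seven digits. These labels are made up for the
# sketch.
labels = {
    "3492": "3:49.2, near world-record mile time",
    "1949": "a year",
    "1976": "a year",
    "2001": "a year",
}

for c in chunk(raw):
    print(c, "->", labels.get(c, "?"))
```

The point is not the code but the shape of the trick: recall scales with the number of meaningful chunks, not the number of raw characters.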
This demonstrates why advice to use random passwords is next to useless. People can only remember about seven random characters, and in real life nobody has to remember random strings of anything. The things we do remember are song lyrics, people’s names, events, television shows, comedy patter, and so forth: things that carry emotional meaning for us. They are not random characters but words that hold meaning, and meaning is what makes them memorable.
It also explains why people fall for fake A/V software. People are used to being told by the industry that they need it; they already have a level of trust built up with the software industry around security practices. We can tell them not to click on pop-up links and to be suspicious of adverts for A/V software. They might say “Yes, I will be careful in the future,” but we haven’t engaged their prior beliefs that:
they need A/V, and
they trust us to tell them the truth.
When a scam crosses their path, they revert to the beliefs that they need A/V and that the person behind the scam is telling them the truth, and they click to install it.
Instead of preaching to users that they must be careful about scams, we should integrate the warning into the message we already teach: they need A/V software, and they should only ever get it from trustworthy sources. This builds on a belief they already have (I need A/V) and adds a new strand of yarn (get it from a place I trust). We then must educate users about who is trustworthy.
Students transfer abstract concepts better than overly contextualized ones. We teach first that users should only download A/V software from trustworthy sites. We do not necessarily start with “Look for the https” or “Is this a site I recognize?” Those specifics come later. The expert takes the general concept and then finds the specifics. In this case, the expertise we want users to acquire is to first stop and think, “Is the site I am downloading this from trustworthy?” What comes next for the expert is the question, “How do I tell that this site is trustworthy? Oh, it says https://avg.com! I know that the ‘s’ means the connection is secure, and I have heard of AVG!”
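The expert’s two-step check described above can be sketched as code. Everything here is an illustrative assumption: the trusted-domain list is a made-up allowlist, and a real trust judgment involves far more than the URL. The sketch only shows the shape of the heuristic: abstract question first, concrete signals second.

```python
from urllib.parse import urlparse

# Hypothetical allowlist standing in for "names the user recognizes".
TRUSTED_DOMAINS = {"avg.com", "avast.com"}

def looks_trustworthy(url: str) -> bool:
    """Apply the two concrete signals: https scheme, recognized domain."""
    parsed = urlparse(url)
    uses_https = parsed.scheme == "https"
    known_name = parsed.hostname in TRUSTED_DOMAINS
    return uses_https and known_name

print(looks_trustworthy("https://avg.com/download"))    # -> True
print(looks_trustworthy("http://free-av-now.example"))  # -> False
```

Note the order mirrors the pedagogy: the function exists because the user first asked the abstract question; the https and domain checks are just the specifics an expert reaches for afterwards.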
This relates to my previous point about expertise. Experts can draw on large bodies of information, recalling organized knowledge and applying it to new situations. Someone who has learned about fake A/V can now apply that lesson when they come across, say, an online pharmacy site. They have learned to ask, “Is this Internet site trustworthy?” and to look for signs that the site can be trusted. Does it use https? Do they recognize the logos or the URL? Their preconception is that the Internet is a place to buy things; what they have learned is to only buy from trustworthy sources. The abstract concept was added to the ball of yarn and is now applied to a new scam.
When we engage pre-existing beliefs, we improve transfer.