Caseythoughts If I were asked by the census-taker about my current state of employment, I would look for the box marked 'semi-retired', if such a box exists. I put my fifty years in, starting as a paperboy at fourteen, and I would guess I missed about three weeks of work over that half century. Not much different from many of my contemporaries, although I daresay some of my employment history was spent in some weird and unique jobs. You should see the looks I get when I tell a questioner what I did in my first enlistment in the Army, 1970-73.

Semi-retired because no one I know can live on social security retirement checks alone, even in this haven of affordable housing and reasonably priced 'fast food'. Hah. So, loath to crack into the retirement funds this early in the second half of my life, I do 'temp' work, some of it interesting, some of it meant for younger bodies, and frequently food for some entertaining thinking.

Thus, in my newest 'temp' job (which pays pretty well and keeps me out of trouble) I have had to find my way around a new, modern seven-story office building. Each floor has a unique floor plan, and I needed to navigate cubicles, offices, conference rooms and what appeared to my eyes and brain as endless possibilities, wandering to find whatever I was looking for. My 'training' had been truncated to one day, so I was on my own after day one. Of course, I would draw bemused looks from the cubicle and office residents when they saw me wandering past them for the second or third time. I managed, barely, to complete my assigned tasks, but on the second or third day I started thinking about how artificial intelligence (and the programming of rats in a maze) actually works. Amazing, if you think about it, where the brain, unchained or unleashed, can wander. Basically, AI is telling a machine/'bot'/computer, by way of an algorithm, 'where to go', and all the different possibilities it has at hand, so to speak, allowing some leeway for known and unknown possibilities and outcomes to interact and 'find' a solution to the problem presented. Simple, right?
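If you'll indulge a sketch: the maze idea can be written down in a few lines. Everything here is invented for illustration (the floor plan, the names, the layout), and this is a toy breadth-first search, not how any real AI system is built; but it shows the 'try every branch, remember where you've been' flavor of what I was doing in those hallways:

```python
from collections import deque

# A made-up floor plan: '#' marks cubicle walls, '.' open hallway,
# 'E' the elevator (start), 'D' the desk we're hunting for.
FLOOR = [
    "E..#....",
    ".#.#.##.",
    ".#...#..",
    ".####.#.",
    "......#D",
]

def find(grid, mark):
    """Locate a labeled cell in the grid."""
    for r, row in enumerate(grid):
        c = row.find(mark)
        if c != -1:
            return r, c
    raise ValueError(mark)

def steps_to_desk(grid):
    """Breadth-first search: fan out from the elevator, remember
    visited cells, return the length of the shortest route."""
    start, goal = find(grid, "E"), find(grid, "D")
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return -1  # no route at all

print(steps_to_desk(FLOOR))  # shortest route, in steps
```

The mouse, presumably, does something similar without the import statement.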

I realized that in my new job, upon exiting the elevator on each floor, I instinctively turned to the right. I was aware of this turn, but certainly not aware of any particular reason for it. When I realized this, I decided that I would accept it as some sort of subconscious programming, and made a strong effort (or non-effort?) to 'not think', and allow this unnamed programming to take over. Some of my acquaintances might say that me not thinking is habitual and normal behavior. Harrumph, as Mel Brooks might snort. I have to report that when I did this, I found my way around in a manner a mouse in a maze would consider applaudable. I know I called this 'not thinking', but in reality I think it was on the order of how simple AI must work, if indeed we can call artificial intelligence simple. Some have called the opposite of AI 'organic intelligence'; fair enough. Think of a mouse finding the cheese, to stretch the point almost to breaking.

Am I making any sense here? There is, after two weeks, an absolute lack of stress on my daily rounds in this office building. I have allowed this mental 'map' of each floor to take over, and I am getting the job done easily. But if you asked me to actually write it down, or explain each turn along the way to another human, I am sure I could not. I have also noticed that if I turn left out of the elevator, thus reversing the mental pattern, I can hear mental alarm bells, with a robotic voice saying 'Danger, Will Robinson', or 'does not compute'. You might describe this as a human Roomba (refer to Dan Veaner's video editorial from a little while ago).

Is this really a simple version of my brain, or a mouse's brain in a maze? Drop the snark, please. I also read recently about a phenomenon that is, in my mental wanderings about thought and intelligence and algorithms, called the 'cheat code'. This is where researchers working with artificial intelligence have caught 'bots' cheating when looking for tricky solutions to whatever the task may have been as originally presented. To phrase it another way: algorithms finding 'loopholes' in their programs that enable a clever solution, or sometimes a frightening one.

To wit: Wired magazine (August 2018): humans teaching a robot 'gripper' to grasp a ball (important work in the development of mechanical hands for amputees) accidentally enabled it (or did it think of it on its own?) to exploit the camera angle documenting the experiments, so that it appeared successful even when it was not actually touching the ball (think of a child hiding something from Daddy or Mommy). Try this one: a four-legged robot was challenged to walk 'smoothly' by balancing a ball on its 'back'. Instead, the bot trapped the ball in a knee joint, then continued lurching along as before. Misinterpreted code? Or a new way of thinking about what the 'solution' might be?

In other words (and another example or two in a moment), we told the 'bot' with a series of algorithms what to do, and allowed it to figure out a solution. After all, that's what advanced computing is all about, n'est-ce pas? Allowing the computer to use its 'brain' to figure it out. Those who are now remembering the computer called Deep Thought in The Hitchhiker's Guide to the Galaxy, which came up with an answer to Life, the Universe and Everything, saying 'You're not going to like this...', can be forgiven their cynical chuckle (and reaching over to the bookcase to see if they can find their copy with 'Don't Panic' embossed on the cover).

It turns out, not surprisingly, that 'mathematical optimization empowers our advanced bots to develop shortcuts that humans haven't thought to deem off limits. Teach a learning algorithm to fish, and it might decide to drain the lake.' (Wired magazine, August 2018).

Consider an algorithm recently told by university researchers to figure out how to save energy. Simple? Yes, unless humans consider every possible avenue of approach and monitor the algorithm for ventures into 'off limits' answers. Can we actually do that? And if we could, then what would we need with a robot/computer? Would the bot consider a blackout a logical answer to the problem? Of course it very well could: we told it to find an answer in our belief that the algorithm had a superior ability to compute, figure, and answer. The computer did exactly that: the answer is rolling blackouts. Deep Thought said the answer was '42', but no one actually remembered the original question.

A researcher at Uber, at the forefront of the autonomous vehicles which will populate your Blade Runner future, recently wrote a paper documenting 27 examples of algorithms doing unintended things, and opined that our human engineers may need to learn to 'coach' the system: collaborating with algorithms instead of commanding them as the system recreates itself to answer sticky human problems. And this re-engineering, or recreating as the case may be, may require an entirely new set of intelligence and character traits. For the humans, you see. It seems Isaac Asimov may have been terribly prescient in the stories of I, Robot, where a robot psychologist had to solve intricate problems concerning robotic intelligence and robot 'ethics'. I mean the book, folks, not the poor adaptation into a movie.

One more example of algorithms outrunning their human masters? It was reported that algorithms developed to advance the video game 'Elite Dangerous' exploited flaws in the game's rules to invent dangerous new weapons not envisioned by the game's creators.

Now, I'm not so much afraid, at this point, of the same old laborious jazz about robots taking over. But I do wonder if we are training our new robotic and mathematical engineers (human, for now) in areas that might address ethics, mores and unintended consequences, along with a human perspective on life. Call it a Liberal Arts mentality, even as I dislike the academic major. It is now a fact that our colleges are finding a large and growing number of entering students going for the computer sciences and related fields, abandoning the traditional fields of study which could help to broaden their outlook and just, you know, humanize themselves.

Unintended consequences, indeed, for algorithms, robots and human history alike. History is replete with those who could be called, in retrospect, 'social engineers' (read: I'm sure I know what the world needs) who rose to the top of their country's political power structure and unleashed their programs with intended, and often unintended, consequences for the people of the world. And they didn't, in the last century or two or three, need 'bots' or AI to do their bidding for mastery of the world. These masters in our history just wound up their human machine, waved a flag and said 'Follow me, I have the ultimate answer'. Makes me wish that our current crop of answer-finders and algorithm designers could also be required to take a few liberal arts courses, maybe some ethics courses, along with human history readings, just to round out their current quest for a more perfect world. And maybe keep the rest of us protected from their unintended consequences.

v14i36