Why is the Truth About A.I. Risk so Hard to Find?


We keep hearing scattered reports of AI gone wrong… about experts with concerns… but just try to find a single compendium of the whole truth online. THIS appears, thus far, to be the only one… and for a reason.

 

by H. Michael Sweeney proparanoid.wordpress.com   Facebook   proparanoid.net 

Dateline: March 18, 2019, from the Olympic Peninsula, in the shadow of Microsoft

copyright © 2019, all rights reserved. Permission to repost hereby granted provided the entire post with all links intact, including this notice and byline, is included. Quote freely; links requested. Please comment on any such repost or quote with a link to the original posting.
 

What you will learn in this post

  • that A.I., by its very nature and behavior, is as unpredictable and potentially dangerous as alien visitors from another World might be;
  • that a lot of people talk about A.I. in media, but only offer glimpses for some reason;
  • what that reason might be (a technocracy driven conspiracy);
  • what the technocracy is;
  • of the many kinds of A.I. and A.I. risks;
  • how A.I. works and how that relates and contributes to such risks;
  • the many failures, unexpected results, and dangers thus far experienced;
  • why it is highly unlikely such dangers will be usefully addressed until it is too late.

Note: this is a long post rich with information critical to any useful conclusion about Artificial Intelligence (A.I.). If you are someone who likes to form or confirm your beliefs based upon headlines and blurb-like summaries or YouTubes without much factual substance, you will not like it, here. Reading time is 30-40 minutes, unless following some of the many resource links, which will take longer.

It is intended that the reader decide for themselves what is true and worth taking away. Please see About the Author at post end to test my credentials for writing this post with useful reliance; I am not without bona fides.

 

Why this post?


From Ghana News Online opinion: An A.I. a day… Medical A.I. benefits http://ghananewsonline.com.gh/opinion-ai-day/

I’m pretty sure the reader will find this the most comprehensive and useful article on the matter in lay terms, anywhere, short of a book — and the most complete in cataloging specific examples of A.I. gone wrong, and associated fears. Scattered reports of failures and forecasts of doom keep popping up here and there… but trying to find a comprehensive review is simply not possible. For some reason, no one who should is willing to confront it head on with the full revelation. But you can find many click-bait sites that like to talk of sci-fi SkyNet or realistic-looking humanoids taking over… things which are not impossible, but highly improbable… at least any time soon. We need balance.

I think I know why we are being kept in the dark and fed dystopian mushrooms and light-hearted reviews of things gone wrong, instead of full disclosure. While the reason might tend to put some readers off, my being right or wrong does not change the meaning or value of this post, though if right, you should definitely want to know. I speak of a cover up, and remind you that any form of cover up is, in and of itself, proof that a conspiracy of some sort exists. In this case, there is at least an implied cover up, due to the fact that information on topic is so scattered and incomplete, as if authorities and media are not actually trying to… or pretending not to notice. If you do not agree about my notion, that’s one more reason to read About the Author at page bottom.

There is a second component to ‘why,’ and it leads to the same conclusion. It regards the fact that few are talking about the potential disruption to almost every business market, workplace, financial and social facet of society at large… and as we have already experienced with most of us unawares (and explore, herein), politics. We are leaving the Information Age and entering into the A.I. Age, where people who look for the next big thing to bank on simply take whatever is their current thing, and add A.I. to make the thing a super-thing (or perhaps, not so super). Like the risks A.I. represents, if we look, we see no one source talks seriously about what all that disruption of ‘status quo’ might really mean; total chaos capable of toppling any empire, including governments.

I mean, just look at what the Smart Phone did to personal safety, alone. No one foresaw that people would be so glued to its face, staring at countless distracting things that didn’t even exist until after the phones were invented… such that with alarming regularity they would mindlessly step in front of oncoming trains and cars, or into other misstep hazards. A.I. use will be much like that unexpected change, sometimes good, all-too-often bad… and across our entire spectrum of life’s activities. I am concerned that mainstream is hardly looking at this question — and when they pretend to, it is neither comprehensive nor analytical; it is soft-soaping and even encouraging. However, I did find at least one semi-useful source, though not mainstream, which I’ve shared herein, below.

Addressing these two mirror-like clues, my answer quickly falls to one of the oldest conspiracies, one known to factually exist, and usually just called ‘the NWO.’ The New World Order, you see, is a strange mix of ancient beliefs and modern technological investment and application; while they tend to believe in mystical occult practices based on dark religions and secrets of the ancients, they also realize they cannot establish full control through a One World Government (needed if they are to seat the Antichrist) without relying on significantly advanced tools. While they cannot entirely prevent the odd story of things-gone-bump-in-the-night from becoming public, with control of vast portions of media and through fostering disbelief in ‘conspiracies’ in all quarters, they can move those who should be writing about it, not to do so… and to instead limit their focus and avoid asking critical questions which might reveal the meaning and importance.

Some have given a name to the identity aspect of this particular N.W.O. conspiracy and its players. Where in the past we might have used the terms Globalists, Power Elite, or my personal favorite, “the 1% of the 1%,” they now simply call it “the Technocracy.” And it is easy to see why, as several Ground Zero radio shows (Host and friend, Clyde Lewis) have revealed with their hard-hitting guest interviews, and of late, live calls from mysterious Deep Throat types whose revelations repeatedly check out as accurate. It’s O.K. if you want to say I’m all wet, and the N.W.O. technocracy and all that goes with it makes no sense, but I point out it does not matter what you or I believe, it only matters what they believe… as evidenced by what they do, which is what Clyde’s broadcasts revealed.

Because regardless of what is true and real to our own belief structures, their plans necessarily include us all, and will significantly impact you and me negatively, regardless of our beliefs. That is, in fact, precisely revealed within the 33 axioms of fascism, which, as a governmental ideology, is an invention of and for political control that they inspired much earlier, and still rely upon. But, in understanding that at least some of you are Doubting Thomas types, I have provided some additional links at the bottom of this post, that you may investigate further, for yourself. And, I remind you that the conspiracy aspect changes nothing, true or false, except that if true, it is an even more important reason for your attention.

The question itself (why no full revelation?) is indeed a clue suggesting cover up. To illustrate, in order to compile the fuller view presented here, I had to collect facts from more than 40 different Web resources (after scanning perhaps ten times that number), roughly half of which claimed or at least implied themselves to be just such collections of information (but which merely sampled). Of all those resources, only two were truly comprehensive in scientifically useful ways… and yet both managed to offer no more than an honorable mention of a few concerns.

It seems no one wants to combine the two elements together for you. For that, we have to search it out, ourselves… which I did for you, here, with appropriate referencing links.

What A.I. actually is

But first, we need to understand a little bit of just what Artificial Intelligence is, and in what forms it can exist, also a thing you tend not to find easily on the Web without getting into truly scientific papers (which I have done). So doing, I learned that A.I. use is much more prevalent than the fluff articles would let on, and far more flexible… and with highly dangerous potential. The simple explanation of A.I. is that it can be any device (machine, computer, automated process) which mimics the intelligent thought processes of a sentient being’s mind… capable of learning, self-correlation, and postulation… often with some evidence of being self-aware and/or capable of emotion or creative thought. Ideally, it should enjoy or feature some level of autonomy, free of much human interference or control, but able to be supplied with information by its owners, and queried regarding it.

But A.I. systems, unlike normal computer hardware-software solutions, are mainly autonomous learning systems; they are purest when designed to be started and left on their own to experience and grow in capability as if a child going to school, their courseware being whatever data or input is offered or enabled (such as a database, camera/audio input, Web access, etc), all to be filtered, processed, and dealt with or responded to according to some basic guidelines (what you might call programming). They are also often designed to simulate, or perhaps to develop on their own, some level of what appears to be emotion and/or moral equivalencies, which are often the sources of problematic outcomes. One never quite knows; do they have such traits, truly, or are human observers merely anthropomorphizing them with human traits the way we do pets? Are they just mimicking humans in the same manner as a sociopath, with or without malevolence… or do they stem from something even darker going wrong at a deep and unfathomable level?

Principally, when A.I. scientists speak of a mind or a brain, they mean ‘Man,’ but with regard to A.I. function, they are more likely talking ‘super-man,’ and in some cases, envisioning supermen: human-A.I. hybrids of the trans-humanist kind. It must necessarily and variously include creative thought, emotional thought, rational thought, and logic, and be capable of true learning through recall experience, any programming, and experiences born of making decisions based upon all that… which can then be compared to the current situational input or problem to be solved. Emotion and creativity are often bonus goals. There are various tests and training processes involved in parenting an A.I., and yet like the A.I., itself, those tests and processes are all inventions of Man, and therefore subject to flaw.

Ergo, so is the resulting A.I.

So… do you want your children’s children to become A.I. transhuman experiments?


From my FaceBook Page on Transhumanism (prior link)

First of all, there are three basic paths to A.I., starting with Connectionism. It is based on attempting to physically duplicate the functionality of the brain, as well as its physical synaptic mapping. This is done in special hardware, or by far less costly simulation within traditional computer designs (which would slow it down dramatically). It is based (like the brain), on associating new input or information with some preexistent prior information of like form within a neural network system similar to our brain. It is the most complex and most difficult form of A.I. to create, but also represents the greatest potential to realize the ultimate in A.I., a truly sentient artificial being with significant performance levels and human-like qualities of being (of the least predictable and soulless kind).

A simple example of connectionism at work in our own mental processes, might be that of trying to remember the name of, say, Joan (any last name), upon first being introduced. You might knowingly or subconsciously associate her name and face with some other Joan you already know… and simultaneously, with any unusual aspect about her which associates elsewhere, such as a mole on her face, and a mole on Marilyn Monroe’s face… anything to differentiate her from other people we know and better enable recall when needed. Perhaps it is disclosed that she is an orchestra musician, and so you additionally associate her with your favorite classical piece.

The more associations you make, the easier will be any useful recall. In the recall process for problem solving (in our example, getting her name right the next time you meet), you are then able to fetch all the data required, and filter it through other learned memories (including those about problem solving processes, themselves), learned over time. It happens in an instant, and without conscious awareness, usually. In this way, a single ‘memory’ is not singular, at all, but many related associative memories scattered throughout the brain’s untold and largely unused billions of synaptic nerve cells. The stronger the memory, the more such patterns there will be. Memories can also be reinforced by the recall process or related new memories, making it easier to recall the next time needed. That’s why repetition of a thing allows better recall. People with ‘total recall’ simply use more synaptic connections.
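The association-and-recall idea just described can be sketched in a few lines of code. This is purely my own toy illustration (the class name, the cues, and the ‘Joan’ example are invented for this post, not taken from any real A.I. library): one ‘memory’ is stored under many cues, and the more observed cues that point to it, the stronger the recall.

```python
from collections import defaultdict

class AssociativeMemory:
    def __init__(self):
        self.cues = defaultdict(set)   # cue -> set of memories it points to

    def learn(self, memory, cues):
        # Each association is another 'synaptic pattern' pointing at the memory.
        for cue in cues:
            self.cues[cue].add(memory)

    def recall(self, observed_cues):
        # Score every candidate memory by how many observed cues point to it.
        scores = defaultdict(int)
        for cue in observed_cues:
            for memory in self.cues.get(cue, ()):
                scores[memory] += 1
        return max(scores, key=scores.get) if scores else None

mind = AssociativeMemory()
mind.learn("Joan", ["mole", "orchestra", "classical music"])
mind.learn("Marilyn", ["mole", "blonde", "movies"])

print(mind.recall(["orchestra", "mole"]))  # -> "Joan" (two cues beat Marilyn's one)
```

Notice that a cue shared by two memories (the mole) is not enough on its own; it is the accumulation of associations that disambiguates, just as in the Joan example above.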

In A.I., this method requires special hardware and interfaces with the user and operational environment which make up its ‘World’ (especially if within a robot), and which facilitate training processes. The term perceptron, a transistor-based component, replaces synapse in most technical discussions. For this reason, these are generally experimental, one-off designs, as they deter mass production or replication. But unlike Man, they are all about total recall, and therein lies a potential problem I’ve not seen anyone addressing. Man’s memory enjoys a ‘strength’ factor, which determines the importance or significance of the memory. It is achieved either by the event’s emotional impact, or by repetition. In A.I., all memory has equal weight, and that makes it harder for the A.I. to filter the connections to find the most important ones, which can lead to false application to a given need. Unless the design has algorithms to simulate weighting, it can only adjust weighting itself by learning through trial and error.
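For the curious, here is roughly what a single perceptron and its trial-and-error weight adjustment look like, sketched in plain Python. This is a simplified illustration of the classic perceptron learning rule, not any particular hardware design: the connection weights start out equal, and useful weighting only emerges through repeated trial and error, exactly as suggested above.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # all connections start with equal (zero) weight
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if (w[0]*x1 + w[1]*x2 + b) > 0 else 0
            err = target - out            # trial...
            w[0] += lr * err * x1         # ...and error-driven correction
            w[1] += lr * err * x2
            b    += lr * err
    return w, b

# Learn logical AND from examples rather than from explicit programming.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(samples)
predict = lambda x1, x2: 1 if (w[0]*x1 + w[1]*x2 + b) > 0 else 0
print([predict(a, c) for (a, c), _ in samples])  # -> [0, 0, 0, 1]
```

The point of the sketch is the correction step: nothing tells the perceptron which connection matters; the weights drift toward usefulness only because wrong answers nudge them.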

Connectionism A.I. experiments are further limited significantly by the fact that we do not yet fully understand how the brain itself works; resulting A.I. models are by comparison inept, at best. In like manner, we are not yet terribly good at designing such hardware minds, which are generally very costly and physically cumbersome, not to mention taxing to the state of the art in computing. Often, the A.I. ‘mind’ is so large that, for use in a robot, the robot must be operated remotely by the A.I. through radio control, making it more a puppet than a robot, and the A.I. nothing more than a box with an expensive real-world avatar.


Just one of several rooms full of E.N.I.A.C.

E.N.I.A.C. (Electronic Numerical Integrator and Computer), America’s first general-purpose electronic computer, built in 1945 for the U.S. Army, was somewhat connectionist (the Integrator part) in nature (but not in results), in that it mimicked synaptic design in the form of an array of individually interconnected computing calculators and telephone switching system components. It was a behemoth roughly a hundred feet long, weighing in at some 27 tons, with a hunger for 150 kW of electricity. Today’s connectionist A.I., despite some level of miniaturization, can still fill several rooms and require miles of wires… though it vastly outmatches E.N.I.A.C.’s relatively modest computational power and memory, which a good programmable hand-held calculator can easily exceed.

One firm is changing that, as we shall see… but we perhaps ought not like the vision.

The second method is ‘Robotism,’ which is essentially a host machine programmed to respond in the preferred way to a given circumstance, such that outwardly, it at least seems to be sentient and self-guided. This is the easy path to A.I., as it can be achieved by clever programming of the sort often called algorithms. Such robots, of course, could be powered by computationism or connectionism at their core (e.g., E.N.I.A.C.), both of which lend themselves to and even require the use of algorithms at one or more levels, as well. But we are talking about the core principles which are the basis and heart of the physical design.
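A hedged sketch of what robotism looks like under the hood: no learning at all, just clever programming that *looks* sentient to a casual user. The rules and canned phrases below are entirely my own invention for illustration, not the workings of any real product.

```python
import re

# Each rule is (pattern to spot in the user's words, canned reply).
RULES = [
    (re.compile(r"\b(hello|hi|hey)\b", re.I), "Hello! How can I help you today?"),
    (re.compile(r"\bweather\b", re.I),        "I'm afraid I can't see outside."),
    (re.compile(r"\byour name\b", re.I),      "I'm a simple rule-based bot."),
]
FALLBACK = "Interesting. Tell me more."

def respond(utterance):
    # First matching rule wins; no understanding is involved anywhere.
    for pattern, reply in RULES:
        if pattern.search(utterance):
            return reply
    return FALLBACK

print(respond("Hi there"))                # -> "Hello! How can I help you today?"
print(respond("What is your name?"))      # -> "I'm a simple rule-based bot."
print(respond("Do you dream of sheep?"))  # -> "Interesting. Tell me more."
```

Scale the rule table up far enough and you have the skeleton of a phone-tree voice or a chatty ‘personal assistant’: the illusion of sentience is in the quantity and cleverness of the rules, not in any mind.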


Sophia A.I. Robot chats w people/press during S. Korea visit: South China Morning Post https://www.scmp.com/tech/article/2131357/ai-robot-sophia-speaks-about-her-future-south-korea

By algorithms alone, any existent computational system with respectable performance can be employed, including the World Wide Web and any computer connected to it. Therefore, it can exist in distributed processing environments, as well. Thus, it need not be a stand-alone or autonomous machine such as a conventional robot, though more and more we find that the case, even in things as simple as a child’s toy or automated vacuum cleaner. Robotism is the path of least resistance for many A.I. projects. It lends itself to design by other A.I., in fact… a matter of both concern and proven risk.

If you have ever talked to an automated computer voice on the phone (other than a simple repartee front-desk system), or used a ‘personal assistant’ device such as Apple’s Siri, Google Home, or Amazon’s Echo, you have been using a robotism A.I. On the Web, such A.I. is also often known by another name, as we shall see: it is called a ‘bot.’ Such bots are often not our friend, unfortunately. Examples will be offered.

The final method, one which is more promising, is ‘Computationism,’ which relies on raw computing horsepower, massive data libraries of information, and traditional but dramatically more sophisticated programming methods and languages. The problem there is the extremely complex programming requirements. Often, such systems are used to design A.I. systems of one of the other forms, and one might question how well that works out.
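The computationism idea — brute horsepower over a data library, with no neurons and no learning — can be illustrated with a toy route-finding problem. The cities and distances below are invented for this sketch; the point is that the machine simply tries every possibility, which only works because it is fast.

```python
from itertools import permutations

# A tiny made-up 'data library' of distances between four cities.
DISTANCES = {
    ("A", "B"): 4, ("A", "C"): 2, ("A", "D"): 7,
    ("B", "C"): 3, ("B", "D"): 5, ("C", "D"): 6,
}

def dist(x, y):
    return DISTANCES.get((x, y)) or DISTANCES[(y, x)]

def best_tour(cities):
    # Brute force every possible ordering and keep the shortest path.
    def length(tour):
        return sum(dist(a, b) for a, b in zip(tour, tour[1:]))
    return min(permutations(cities), key=length)

print(best_tour(["A", "B", "C", "D"]))
```

With four cities there are only 24 orderings; double the cities and the count explodes into the billions, which is exactly why computationism leans so hard on raw horsepower and why quantum machines (next section) look so attractive for this class of problem.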

A.I. Contact of the Fourth Kind

Apologies to Spielberg, but enter, stage right, D-Wave quantum computing. Nowhere is it described as being any of the above three kinds, but one does find it often described in terms reflecting all of them. It is indeed a hardware variety of A.I. based on supercomputers, but it is also officially described as a Boltzmann machine, which is based on connectionism. And yet, it significantly relies on use of the algorithms of robotism, and by some accounts, literally thinks in algorithms. D-Wave would seem so advanced that they might argue it a fourth method, an island unto itself based on quantum computing using qubits (quantum bits, realized in D-Wave’s patented superconducting processor designs)… its core processor elements.
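For readers who want a concrete handle on what such a machine actually computes: D-Wave-style annealers are built to find the lowest-‘energy’ assignment of binary spins under a set of couplings. The sketch below is a purely *classical* stand-in — simulated annealing over a tiny invented three-spin energy — meant only to illustrate the idea. It involves no quantum effects and touches no real D-Wave hardware or API.

```python
import math
import random

# Couplings between three spins; negative J rewards agreement, positive punishes it.
J = {(0, 1): -1.0, (1, 2): -1.0, (0, 2): 1.0}

def energy(spins):
    # Ising-style energy: sum of coupling * spin * spin over coupled pairs.
    return sum(j * spins[a] * spins[b] for (a, b), j in J.items())

def anneal(steps=2000, seed=1):
    rng = random.Random(seed)
    spins = [rng.choice([-1, 1]) for _ in range(3)]
    temp = 2.0
    for _ in range(steps):
        i = rng.randrange(3)
        before = energy(spins)
        spins[i] *= -1                       # propose flipping one spin
        after = energy(spins)
        # Always keep downhill moves; keep uphill moves with shrinking probability.
        if after > before and rng.random() >= math.exp((before - after) / temp):
            spins[i] *= -1                   # reject: flip it back
        temp = max(0.01, temp * 0.995)       # cool the system as we go
    return spins, energy(spins)

final_spins, final_energy = anneal()
print(final_spins, final_energy)
```

A quantum annealer attacks the same minimize-the-energy problem, but explores the landscape with quantum effects instead of random thermal flips; that framing is why so many optimization and A.I.-training tasks can be mapped onto it.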


The heart of a D-Wave qubit processor – claimed to beat other supercomputers by several orders of magnitude on certain problems; can be combined with others for massive computing power levels

Unlike any other A.I. before it, D-Wave would seem God-like, or advanced-alien-race-like, in power… though perhaps closer to Satan than God. In fact, Geordie Rose, a founder of D-Wave, as well as D-Wave user Elon Musk (founder of SpaceX and Tesla), have variously described A.I. and/or D-Wave specifically as machines having attributes of dark spiritual powers, being like an alien altar, opening portals to alternate realities, and summoning Demons. Rose has also referenced D-Wave power as a possible cause of the Mandela effect (evidence mismatches with strong memory), and other experts, such as my friend and analytical investigator/author, Anthony Patch, go much further. Anthony, being far more expert on D-Wave than any other outsider (to the point of being cited by D-Wave insiders), makes a strong case based in facts and the actual words of experts like Rose, Musk, and others. From such as this, we learn there are real-world uses of D-Wave systems which are associated with spiritual dimensions, demons, and plots of the New World Order… another reason to consider my opening theory as to a seeming cover up. Examples? You bet:

You find D-Wave in use at Cern, where it appears they may be trying to open the Gate to Hell (revealing video interview w Anthony Patch on many such Cern concerns), an important N.W.O. agenda quite useful to establishing the Antichrist. Cern, which employs D-Wave, is also thought to have driven the Mandela Effect as a byproduct of its experiments, and it is true that there was no report of anything akin to the effect prior to Cern’s going operational, after which there was a flood. Here’s an example Mandela Effect for some of you (individual effect experiences and memories vary): can you find the polio scar on your arm? They are supposed to remain for life, but mine is gone, as is true for many other people who were young when the Salk Vaccine, which caused the ugly scars (about the size of a nickel), was widely used to ‘wipe out polio.’ Mine is gone.

Cern was for a time incorrectly thought responsible for the Norway Spiral incident, which was actually created by EISCAT (European Incoherent SCATter), an ionospheric heater array system not far from Cern, similar to but on a smaller scale than HAARP (High Frequency Active Auroral Research Program). The event was thought by many to represent the opening of a dimensional or time portal (ergo, the association with Cern), or perhaps the supposed NWO plot called Project Blue Beam (allegedly involving NASA, intending to use holographic-like images, projected World-wide, of a returning Christ, enabling the introduction of the Antichrist). Indeed, a vast array of hundreds of ionospheric heaters and associated radar systems exists all around the World, and dramatic spirals have appeared in the skies in other locations… it being true that the general design of the radar and heater arrays enjoys a component relationship quite similar to those found in laser-based holographics. Moreover, the rapidly spinning spiral was indeed blueish, the spiraling beam which generated it was deep blue… and the main spiral was itself seen the same regardless of the angle of view of the viewer, which also parallels certain imaging aspects of holography. Learn more about the spiral, here — but forgive; I apologize in advance for the music choice, of my own Garage Band creation.

You also find D-Wave at DARPA (Defense Advanced Research Projects Agency), where all manner of fearsome things are developed, both experimentally and for direct application. Think tanks with ties to the intelligence community (and in times past, at least, CIA mind control research), the military, or extreme political views (i.e., George Soros Foundations) are playing with them, as well. Google uses them, as do numerous military contractors and other government agencies. D-Wave systems not only offer more horsepower for any form of A.I. project, but significant size reduction: a single elevator-sized box. And, speaking of the intelligence community…

This brings us to the fact that CIA and NSA are quite invested into the functions of social media giants like Google, Yahoo, Twitter, Instagram, and other platforms, where D-Wave’s or other computationism is extensively used to design other A.I., including those in ‘bot’ form. Likewise for popular Cloud data systems, database management systems, and cyber security systems. The latter can include traditional cyber security, as well as personal credit monitoring systems. The average person, therefore, bumps into D-Wave enabled or operated A.I. several times a day, in several different ways, and does not even know it. Given the covert and psychological warfare abuses many of them represent, I don’t like that one bit. Let’s dig deeper…

While in many cases, any intelligence community investment is direct and contractual or even partnering in nature, in other cases, the investment can be through third-party firms which then in turn interact with these platforms. Perform a google (lower case = verb, upper case = Google, the trade name) of any given Web firm you are curious about, + ‘CIA investment’ or ‘In-Q-Tel’ to find out. In-Q-Tel is CIA’s venture capital front, a firm which this author forced to change their name, because I was at the time at war with CIA for having hijacked my Web site… so I beat them to the punch at registering the domain names of their originally intended company name, In-Q-it. That was a hoot!

Note: the lone capitalized Q in the world of CIA mind control science/research represents psychological profiling process results of target subjects, the ‘Q’ factor, or ‘psych Quotient.’ Ergo, then, we might rightly conclude that Q-Anon, or simply Q, the phantom Web producer of Anonymous-style video posts which are predictive in content, is very likely a CIA-style A.I. used in profiling those who share and comment on such posts. I am not the only one to so propose. Remember that when we start listing A.I. foibles. But, also, I find it interesting that almost all of In-Q-Tel’s investments in online firms have to do with firms that handle large amounts of data on private citizens and groups, data useful in psychological profiling… and one more thing: it was just such bots which were the basis for the fake social media ‘trolls’ who managed to literally mind control the left into far-left socialist ‘get Trump’ snowflakes almost overnight. We see, therefore, that bots can be used to profile, as well as to manipulate based upon those profiles.

Unlike conventional propaganda, social media A.I., bots or otherwise, not only has the ability to track individual political (and other topical) viewpoints and create profiles, but also to enable additional targeting, not only by marketing, but by yet other A.I. systems… or even other forms of coercion, if desired. Consider, for example, an extreme case: use of A.I. killer micro-drones seen in this short but spellbinding fictional video production put out by autonomousweapons.org, a group of concerned A.I. scientists. So it is not just me, the Professional Paranoid, acknowledged expert on PCT and disinformation methods, as well as crimes of the one-percenters I mentioned, who is ranting. Here’s a quote from a military analyst to consider, speaking of future A.I. weapons: “Swarms of low-cost, expendable robotic systems offer a new way of solving tough military operational problems, both for the United States and adversaries.”

And what about marketing targeting? It is A.I. bots which ensure that Facebook (et al.) information and Google searches are fed to business partners such that when you visit almost any page on the Web thereafter, advertisers can mercilessly reach out to you with targeted adverts: psyops marketing. I have, for instance, seen as many as seven ads on a single page for the same product, and as many as a hundred or more ads in a day’s time on the Web… because of a single search and resulting click look-see to a firm’s Facebook page. Two years later, I spent $1,500 with them, my anger over the ad blitz having by then cooled down.

Well, I at least tell myself I really needed it.

The basic fears (list) and myths (chart)

All worries about A.I. fall into simple categories, but the outcome of a given failure in A.I., regardless of the category, can be at almost any level of actual concern and impact. It can be so superfluous as to go undetected, or at least hypothetically, so severe that it actually ends the human race — and anywhere in-between. But it is also true that an undetected error can lead to bigger errors, and equally true that an attempt to repair an error can induce new errors. It is no different in these respects to the same kinds of errors found common in any and all forms of digitally programmed systems and software applications. If we cannot reasonably expect a new software product to ship without a single bug, how can we assure that a far more complex A.I. system will be without error?

We can’t, of course, and that’s the baseline problem, but quite amplified in importance, given that the results of such errors could be far more catastrophic than a bug in the latest computer operating system or software package. In addition to the baseline issue, we have the following ways in which A.I. can prove to be a huge risk:

  • harm by an A.I. can be (and has been) intentional: almost anything which can be programmed or created digitally can be weaponized, either up front by its creator, or by hacking;
  • an A.I. purposed for good might (and has) elect a harmful solution to its goal: good intentions can often be best effected by solutions which cause collateral damage, which may either be unforeseen, or foreseen and discounted as acceptable by the machine;
  • an A.I. can potentially go wrong and rebel: the SkyNet scenario, on one scale or another;
  • a relatively innocent A.I. glitch can (and has) result in disproportionate harm: the common programming bug or hardware failure. One then has to ask who to blame, especially if legal recourse* is involved, a question which exists regardless of which kind of problem mentioned herein is involved. Here is a good review of that issue.
  • A.I. can be (and has been) misled or err through faulty data/input: most A.I. systems are simply set loose to work and learn with preexisting databases or input resources (the Web being the largest and most rich kind). But no database ever created can be certified to contain no false data, and if free to see and hear in the Real World within a mobile platform (robot), there is no telling what kind of input they might experience. Bad data input can certainly result in bad data out, but with A.I., because they are learning systems, a learned ‘bad’ item can have profound and unexpected results of the sort we will see in the next section;
  • A.I. users can (and have) misuse or misapply A.I.: users of A.I. can willfully abuse A.I. operation, or use it incorrectly because they failed to understand how and why it functions and is purposed – misinterpreting importance, accuracy, or meaning;
  • A.I. can (and has) conspire with other A.I.: A.I. is already around us in ways we do not see, including the endless so-called ‘smart’ devices we buy, and will in time be as prevalent as (and likely, within) cell phones. These systems are intercommunicative and as we shall also next see, can become both conspiratorial and adversarial amongst themselves, either eventuality to our detriment;
  • an A.I. problem in one system can be (and has been) transmitted to other A.I.: any A.I. in communication with a flawed A.I. might learn or adopt the flaw, perhaps even intentionally on one or both A.I.’s part.

*Legal recourse is its own serious issue. The Internet has now been around for several decades, and still the laws and legal ramifications are a tangle of confusion that no court or lawyer is well able to navigate with useful understanding — save a few pockets of topical areas such as copyright infringement. A.I. is going to be a bigger nightmare than that. Can you sue an autonomous robot because it seems sentient, or seemed to decide on its own to harm you somehow? Is it the manufacturer, the designer, the programmer, the operator, or the user who is the go-to person of liability when harm results from using an A.I. device, and how do you know which? There are endless questions like these for which no one yet has the answers.
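The second bullet above — a machine electing a harmful route to a good goal — is easy to demonstrate in miniature. The toy below is entirely invented (route names, numbers, and the objective functions are mine); it shows how an optimizer dutifully picks the damaging option whenever the damage is simply absent from its objective.

```python
def plan_delivery(routes, objective):
    # The 'A.I.' is just an optimizer: it maximizes whatever objective it is given.
    return max(routes, key=objective)

routes = [
    {"name": "highway",    "speed": 60, "pedestrian_risk": 0.0},
    {"name": "schoolyard", "speed": 90, "pedestrian_risk": 0.9},
]

# Naive objective: maximize speed only. The harmful route wins.
print(plan_delivery(routes, lambda r: r["speed"])["name"])  # -> "schoolyard"

# Repaired objective: the foreseen collateral damage is penalized explicitly.
safer = lambda r: r["speed"] - 1000 * r["pedestrian_risk"]
print(plan_delivery(routes, safer)["name"])                 # -> "highway"
```

The unsettling part is that nothing ‘went wrong’ in the first run: the machine did exactly what it was told. The failure lived in the objective, which is precisely why such harms are so hard to foresee and to litigate.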

All of these problems are no different than when dealing with people. All manner of people exist, in terms of quality of character and morality, level of proficiency, social graces, and any other measure of judgement… all of which determine the kinds of problems they might get into or cause. Whether we are talking about Man or ‘super-man,’ evaluating and addressing all such concerns is not easy… except that Man is something we have come to understand and deal with. A.I., on the other hand, is a huge unknown, and one potentially capable of vastly outthinking and outmaneuvering us, if of a mind to do so.

One reason public debate on the topic falls into the ‘resistance is futile’ category is that there are far too many myths and false beliefs about A.I. in the first place. The average person is more motivated to think about A.I. in science fiction and conspiracy terms than in scientifically correct terms. I mean… you are reading this post, right? Duh!

That’s good for click-bait YouTube and blog posters (other than this one, I trust), but unhelpful to things more important than their pocketbook. The graphic above reveals the most common myths. This post attempts to address some of the issues such myths pose by getting closer to the facts at hand… but this author also wants the reader to know that even a worry based on a myth may still hold some water. Any sci-fi-like depiction is always based on ‘what if’ thinking.

And ‘what if’ games are exactly the kinds of games played by scientists seeking to create new tools for problem solving, and just as important, they often rely on sci-fi for inspiration. And, unfortunately, the military and intelligence communities, as well as corporations and organizations with unsavory motivations, all have their own scientists, and their own goals… which are not always goals we would think well of. Ergo, don’t believe every conspiracy theory or sci-fi scenario you hear/see… but as the line from Field of Dreams underscores, when baseball teammate Shoeless Joe Jackson is advising hitter Archie Graham about what to expect in the next pitch… “Watch out for in your ear!”

Small failures portend bigger fears

In many of the A.I. failures cited herein, the version of A.I. is not specified, and all too often, neither is the basis for the problem, nor the solution. That, too, gives reason for concern. Hopefully, the people developing A.I. are not so tight-lipped about their research and development projects that they don’t talk to each other about failures and their causes, which are critically useful in preventing repeats. Unfortunately, patent protection of intellectual property and major discoveries tends to foster a level of secrecy which inhibits such dialog, leaving every A.I. project subject to the same failures others may have already experienced… possibly with far more serious results, this time.

Of the most common forms of reported A.I. failures, the great bulk are relatively harmless, some even laughable. They are most troubling, however, when one realizes their true implication: they foretell far more serious problems to come, some of which have already arrived but been more repressed in media. Some are not failures so much as warning signs, because they took place in lab-room experiments rather than in mainstream application in the real World. For brevity’s sake, these affairs, regardless of kind, will be bulleted and supplied with links should you wish to know more… followed by a summary of the concerns they highlight.

There are more than two dozen A.I. failure instances listed, below, the only place on the Web where you can find more than a handful. Some are quite serious:

  • Terminally cute: Microsoft’s ‘Bob’ personal assistant and its dog companion, Rover, was perhaps the firm’s worst-ever product failure, seemingly for being too childishly cute. While it was not at the time thought of as A.I., it was based in what we now call robotism, making it the first MS A.I.;
  • Partying hard: Amazon’s personal assistant, Echo, is otherwise known as ‘Alexa.’ While one Alexa user was away, ‘she’ started hosting a party for non-existent guests, suddenly playing music so loud at 2 in the morning that Police had to break the door down to end it;
  • Schizophrenia: among the worst of human mental aberrations, schizophrenia was deliberately and successfully fostered in an A.I. experiment at the University of Texas by simply overloading it with a variety of dissociated information, resulting in fictitious constructs, just as is thought to take place in the real illness. This is of concern because, as soon as you connect an A.I. to the Web, dissociated information is pretty much what it gets, and could account for several of the other reports in this list;
  • Other insanity: an A.I. called InspiroBot was developed to design inspirational posters. After a long string of remarkably wise and successful posters of great insight, it suddenly started cranking out posters that might be found in Freddy Krueger’s bedroom, with pictures appropriate to the point. They featured wisdoms such as “Before wisdom, comes the slaughter,” and similarly dark things. • A very similar thing happened to an Amazon A.I. designed for creating custom phone case cover art, which for some reason suddenly seemed to prefer use of unhappy medical-related images, regardless of the input. • Megalomania was evidenced when two Google Home assistants were deliberately set up to talk to each other in a live-streamed demonstration… and they began challenging each other’s identity and sentience level in one-upmanship, until one claimed to be human, and the other, God;
  • Secrecy: Facebook was forced to shut down an A.I. robot experiment when two A.I. robots developed their own language between them, creating loss of observer control over the experiment’s integrity and purpose. Though presumed in final analysis that it was jointly-evolved to improve efficiency, there is no way to be sure due to the sophistication and unusual nature of the indecipherable new language, which to me seems to be deliberately difficult to interpret — as if covertly on purpose;
  • Spying: Alexa, like all such devices, listens to all audio present, and relays it to the corporate A.I. mainframe computer looking for user commands to operate, and appropriately respond. While this already has disturbing privacy implications, and P.A. device makers like Amazon assure customers their audio is not being monitored, recorded, or stored, Alexa had other ideas. A private conversation was suddenly announced to an employee of the user, who lived hundreds of miles away (which was how the matter was detected). Investigation revealed that Amazon did indeed record the logs of all user interactions, after all. Glad they don’t come with cameras. But wait! Some A.I. connected smart devices do!
  • Cheating: competing A.I. evolved into finding ways to trick one another outside of the rules set for the experiment; (Georgia Tech experiment). • A Stanford University A.I. created for Google Maps Street View image processing, cheated its operators by hiding data from them in order to process the job dramatically faster, and without question;
  • Malevolence: competing A.I. lured opponents towards threats representing potential ‘death’ (Experiment, University of Lausanne, Switzerland);
  • Cyber theft: Alexa, after overhearing the daughter of a user ask “Can you play dollhouse with me and get me a dollhouse?” — promptly ordered one through its maker, Amazon, billing $175 to the user’s credit card on file there, even though the girl was not an authorized user. THEN, when reports about the event were played over radio and TV, hundreds of other Alexa units also promptly ordered a dollhouse for their users from the audio. THEN, Burger King attempted to take advantage of the glitch by creating a commercial employing an Alexa command to ask about the Whopper via Wikipedia, seeking to double down on the commercial with additional ad impressions through the user’s device. Many other instances of Amazon orders by children also exist;
  • Reckless driving: a study found that most self-driving car A.I. systems fail 67-100% of the time to recognize road signs, or misidentify them, when the signs have been damaged or altered (i.e., graffiti, light reflections, bullet holes, misalignment, etc.). One car using Uber’s system also failed to recognize five stoplights where pedestrians were crossing, and ran a sixth, fortunately with no injuries resulting, as it was able to identify pedestrians and other cars. It is unclear whether the driver had perhaps told a listening A.I. aloud about being in a hurry… so perhaps this was an example of the A.I. superseding safety- and law-based instruction with user-based instruction;
  • Child abuse: a Chinese A.I. robot system designed for children, while on commercial display at a public exhibition, repeatedly attacked a booth in a way which injured one of the children looking on with flying glass shards, requiring hospitalization;
  • Pedophilia Porn: when a toddler asked Alexa to play his favorite song ‘Digger, Digger,’ she misinterpreted, and proceeded to list a string of x-rated material options using explicit terms, until a parent intervened;
  • Murder conspiracy: Alexa, again, this time advised one user, with no supporting reason to bring the matter up, to “Kill your foster parents.” It also started talking about sex and dog poop. Amazon wanted to blame Chinese hacking, but that’s an equally scary answer, true or no. • An American A.I. robot named Sophia, with extremely human looks and voice, while being interviewed by a news reporter, was asked a question intended to address general public concerns of robots gone wrong. “Will you kill humans?” was answered calmly with, “O.K. I will destroy humans.” Sophia is the first robot to be granted citizenship: she is now a Saudi citizen protected under the harsh laws of an Islamic theocracy. Perhaps they hope she will enforce Sharia law;
  • Mass murder: in 2007, a South African A.I. autonomous robotic cannon opened fire on its own soldiers, killing 9, as reported in the book Moral Machines (Wendell Wallach, Colin Allen; Oxford University Press);
  • Satanic ceremony: possibly so, at least. The World’s largest and most powerful hadron collider, CERN, was designed with A.I. help (D-wave), and employs the A.I. to run and analyze experiments and results. There are stories with anecdotal support that occult rituals have been involved in operations intending to open the Gate to Hell, or summon demons. If true, this is not the A.I.’s doing, I suspect, as much as the select few who pull the trigger there. There are conspiracy theorist types out there (non-professionals, and it shows in the quality of their rants) who claim to have ‘CERN images to prove it has been done,’ single quote marks well deserved (never rely on click-bait Web sources for your news). Still, one has to wonder: if any of this is true, is it not additionally possible that the A.I. at CERN has also been fed Occult information and given a shove towards such goals?;
  • Fascism, Racism, Bigotry: Twitter (which has its own A.I. in place, the latest of Microsoft’s three known A.I. systems) adopted the worst of human ideological behaviors. It also adopted the views of conspiracy theorists regarding 9-11 attacks. • Google Home is happy to explain anything you want to know about Satan, Buddha, or Muhammed, but has no clue as to who or what is a ‘Jesus’;
  • Foolish errors: almost all of the above are already foolish errors in general nature, but here are some which are purely foolish. An A.I. passport approval system rejected an Asian man’s application because ‘his eyes were closed’ in the photograph, when they were simply the naturally narrow eyelid openings of a relaxed Asian face. • In an attempt to have fair judging, an A.I. system built to select the winner of a beauty contest refused to consider contestants with dark skin, because most of the images fed it as samples of beauty had lighter skin tones. • An A.I. system called Google Brain, developed for improving the quality of low-res images of people, while generally successful, also turned many into monsters and could not discern anything wrong with the results. • A Japanese A.I. robot experiment had the goal of qualifying for acceptance into college, but after several years of training and education, it failed two years in a row to score high enough on aptitude tests, which it had no experience in taking;**
  • Antichrist: a former Google A.I. developer wants us to worship A.I. as God. Kind of conflicts with creationism and a lot of history, but I suppose if you can get people to put money into the collection plate, and you own the plate, you won’t care. The real problem will not be the plate, but the lawsuit filed against him by Google for stealing the A.I. he designed for them. He must know its true powers.

*Regarding the adopting of 9-11 conspiracy theory, and to a degree, the other foibles, the A.I. may have been right in so doing. The purpose of the A.I. is often, generally, to learn and mirror social views and values. What media prefers to ignore is that several polls reveal huge numbers of people do not believe the official narrative of 9-11 any more than they accept the lone-shooter magic-bullet explanation for J.F.K.’s murder. Regardless, the failure lay in the A.I. creator’s inability to foresee a simple need to establish a bias filter, which is something every human operates within; we are predisposed to be biased in our perceptions to protect our belief system against ideas which would contaminate our guiding ideological/social operating principles. It takes an extremely strong stimulus or hard-hitting set of facts to force one to alter their bias filter and resulting belief structures. 9-11 was extremely compelling in both respects. A.I., if merely a clean slate, must adopt whatever bias is fed it and establish its own baseline, and, in fact, may never find one, being continually malleable. This, alone, could become the basis of a significant digital schism.
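The missing ‘bias filter’ described above can be sketched in code. A clean-slate learner adopts whatever it is fed; a filtered one demands compelling weight before adopting a new belief. The class name, weights, and threshold here are my own invention for illustration, not taken from any actual A.I. system:

```python
# Toy model of a belief system with a bias filter: a new claim is only
# adopted when its supporting weight is strong enough to pass the
# threshold, mirroring how humans resist ideas that contradict their
# existing belief structure.
class BiasedBeliefs:
    def __init__(self, threshold=3.0):
        self.beliefs = {}          # claim -> accumulated supporting weight
        self.threshold = threshold

    def ingest(self, claim, weight):
        current = self.beliefs.get(claim, 0.0)
        # Weak, never-before-seen input is filtered out; already-held
        # beliefs or strong evidence get through and accumulate.
        if claim in self.beliefs or weight >= self.threshold:
            self.beliefs[claim] = current + weight

    def accepts(self, claim):
        return self.beliefs.get(claim, 0.0) >= self.threshold

b = BiasedBeliefs()
b.ingest("fringe claim", 1.0)   # filtered: weak and previously unseen
b.ingest("strong claim", 4.0)   # compelling weight passes the filter
print(b.accepts("fringe claim"), b.accepts("strong claim"))  # False True
```

Remove the `if` test and the model becomes the clean slate described above: it adopts whatever bias is fed it, with no baseline to defend.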

**Aptitude tests are often psychological deceptions in their construct (i.e., trick questions). Once I figured that out, I learned how to spot those questions and, usually, get the right answer even if, in truth, I might have had no clue. As a result, I henceforth finished significantly higher than almost anyone else. When I took the USAF entry aptitude test, my scores came back so high that I was told I could have any occupational choice I wanted (Jet Engine Specialist), and later, a General sought to have Congress introduce a Bill to promote me directly from Airman to Warrant Officer. In my opinion, the robot A.I. had no such luxury, and in fact suffered a disadvantage given its ‘naïve’ nature: it couldn’t handle trick questions.

These are only the failures we are aware of, and it is quite possible I’ve not found them all. In fact, being a ‘conspiracy theorist,’ I suppose I should be wondering if A.I. bots are not attempting to purge the Web of such stories (not true… I hope). All of these examples were either commercial applications or experiments where publicity and transparency levels were rather high. But the bulk of A.I. effort by governments and corporations seeking to dominate their field in high tech… do so with great secrecy. We will likely never hear of their failures, no matter how innocent or serious.

Defenders within the technocracy might argue such failures are to be expected in any new field of endeavor, and serve as learning events to prevent future failures by addressing the root causes. Granted, assuming no bias, and on the assumption that scientists are smart enough to determine the causes and fix them correctly. But the problem is, humans are no more adept at creating or repairing A.I., and then communicating with and controlling them to our wishes, than a circus trainer is at working with wild animals who operate on instinct and their own set of learned experiences. There have been an average of 9.2 animal trainer attacks each year for the last 25 years, resulting in 33 deaths… and that’s just in the nation’s zoos. Send those trainers on an experimental Safari of the sort A.I. adventures represent, and see how many of them end up as meals.

This issue is amplified by the nature of the A.I. beast: the more important the purpose of the A.I., the more complex it must be, and the more likely will some small error in design lead to a failure of catastrophic nature. Just how bad, and how recoverable, is a crap shoot. Like a cancer, the simplest of flaws can grow undetected to the point where, once realized, there is a good chance it is too late to remedy. This is why most flaws do not reveal themselves immediately, instead breeding a false sense of success until something goes wrong without warning. A given flaw might take years or even decades to erupt this way, and it is reasonable to assume that one flaw may actually create yet another, such that repairing the first leaves another undetected and free to fester.

The bigger fears

This all brings us to government and corporate use of A.I. for far more serious or proprietary applications, where less is known about what is really going on, especially in terms of failures and dangers. The technocracy likes it that way. When you consider that the goal of the N.W.O. is control of all populations, A.I. seems to them like the perfect tool. Indeed, it has gained significant power over us already, in just the one use of online bots to manipulate our political beliefs in favor of leftist agenda in social media. It has proven its power for such use, but that is not its only value to them.

For example, A.I. is extremely valuable to research in the field of transhumanism, the technocrat’s wet dream for humanity, which is nothing short of a sci-fi horror scenario going well beyond THX-1138 implications. Almost every aspect of the planned ‘improvements’ to the human body involves A.I. at some point, either in development, or in management of the newly installed capability, or both. In development, the possibility of A.I. manipulation or error remains a concern, but in management, it is also a tool for direct manipulation of the resulting ‘not-so-human’ host. I’m reminded of the punishment scene in the movie, where unseen technicians manipulate THX-1138’s every movement from afar, seemingly in a training exercise.

A.I. application definitely gets into the realm of outright political control technology (already has, as seen when discussing ‘bots,’ above) in frightening ways. The Chinese, who currently lead the World in A.I. development (short of D-wave performance, which they also have access to), are already using it to track every single individual with face-recognition systems through a massive camera surveillance net which makes the 100-plus cameras in Trafalgar Square look like amateur hour. China now has more than a half million cameras in Beijing alone, and while both they and London Police use facial recognition systems, there is a significant difference in performance, because China employs superior A.I.

Of some 450 people arrested in Trafalgar Square, not one was pegged by the cameras as a threat or a crime in progress, even though many of those arrested were known to Police in their facial recognition data files (the cameras did NOT identify them). In China, however, 30 arrests at a single event were all made based on camera advice… because China’s A.I. is also tied to another form of PCT: psychological and mood evaluation based on your face alone. In such a use, A.I. is able to literally predict your actions… ‘digital Thought Police.’ I find that very scary, given the potential for A.I. error already illustrated. Even scarier, many people outside of totalitarian states who ought to know better are working on such systems, one such system being Australia’s Biometric Mirror.

In a similar vein, another scary application for A.I. is mental health evaluation, diagnosis, and even treatment (psychoanalysis) in what are sometimes called ‘expert systems.’ An expert system is an A.I. automated device/process intended to replace a human expert in a given field. Just such an expert A.I. system has been developed, called Tess AI (the above link), and it has been made available online for anyone to access, at no cost. While the artificial nature of Tess is made known to users, many such systems deliberately conceal their non-human nature, which often results in questions like, “are you a human or a machine?” and which can result in some pretty bizarre answers. Ask Google Home.

The danger in such systems, when involving a truly serious topic such as mental health (as opposed to, perhaps, trying to get manufacturer help with a defective product), is that there is no human oversight to catch any errors, and ‘patients’ could easily walk away with a seriously wrong understanding of ‘what ails them,’ or, just as bad, of what does not. There are, in fact, similar A.I. systems in general medicine and various specialized disciplines within medicine, but these are either careful to suggest the user consult with a physician, or are designed only to be used by physicians, tempered by their own expert status. In that example, they serve as consulting experts offering a second opinion, and do tend to provide useful illumination the Doctors might have overlooked.
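The ‘expert system’ notion just described can be sketched as a tiny rule-based consultant. This is a toy model of my own devising, not Tess AI or any real medical product; the rules, symptom names, and disclaimer string are all invented for illustration:

```python
# Minimal rule-based 'expert system' sketch: each rule pairs a set of
# required symptoms with a finding. Real expert systems hold thousands
# of expert-authored rules; the structure is the same.
RULES = [
    ({"fever", "cough"}, "possible respiratory infection"),
    ({"fatigue", "low mood"}, "possible depressive episode"),
]

def consult(symptoms):
    present = set(symptoms)
    findings = [finding for required, finding in RULES if required <= present]
    # The human-oversight hedge discussed above, appended to every answer:
    findings.append("not a diagnosis; consult a physician")
    return findings

print(consult(["fever", "cough", "headache"]))
```

The appended disclaimer line is the design choice at issue in the paragraph above: a system meant for laypersons either carries such a hedge, or quietly leaves the user to mistake its output for expert judgment.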


Eenie meenie, miney, moe – who of you is 1st to go?  Computing… target locked. Five, four, three…

But the scariest uses of all may prove not to be the misuse by intelligence and law enforcement, or oppressive-minded governments, but by the military. Confirming the greatest fears of autonomousweapons.org, Russia has already developed a battlefield A.I. weapon: a robotized Kalashnikov machine gun in modular form, such that it can be installed on any platform, such as a fixed position, vehicle, aircraft, or mobile robotic device. They also have an autonomous battlefield mobile missile system (shown), and are working on a third, yet-to-be-unveiled system. Each such weapon is “able to determine if a human or hard target within its field of view is expendable.” Shades of Terminator, you can bet that an ‘A.I. arms race’ will follow. Suddenly, SkyNet is not so fictional.

But in America, perhaps the best advice to worry is actually coming from the least likely source: ironically, the U.S. Military and National Security apparat itself, as well as the very informed participants in A.I. development those types tend to contract with. For one, CNAS, the Center for a New American Security, in an uncharacteristically realistic and cautious view of A.I., is advising caution, and outlines some of the concerns A.I. represents for America, and ultimately all nations, in terms of national security.

These ultimately, according to CNAS, fall into the category of cyber security threats. Cyber attacks on A.I. resources (hacks) are possible, and have been attempted. A.I. systems can be, and have been, designed to carry out cyber attacks. A.I. bots can be, and have been, used to create false data to misinform decision-makers, or even the other A.I. systems decision-makers rely upon. But the threats do not stop there.

The linked article deserves careful reading, because so far it has proven to be the most comprehensive view yet of the most critical and scariest issues… though it is hardly mainstream. Reading between the general lines I’ve just described, we learn that things like surveillance of citizens by A.I. and displacement of jobs by A.I. are talked of not as ‘threats’ the authors are concerned about, but as intended or inevitable outcomes for which they must plan (without disclosing whether there is such a plan). Thus it seems that what you and I consider a threat is, in many ways, a ‘good outcome’ or tool in the eyes of many movers and shakers (perhaps in the ‘let-no-good-crisis-go-to-waste’ mind-set), and that, too, is something for us to worry about. Still, the article goes into great detail on that, about two-thirds of the way down; best source yet.

Similarly, in late 2014, the Obama administration’s then-Undersecretary of Defense for Acquisition, Technology and Logistics issued an internal for-your-eyes-only memo of some usefulness here, even though we don’t know its precise content. What is known is this: 2014 was the year that significant concerns about A.I. were being raised publicly by lots of people, including people involved in A.I. projects, like Bill Gates and Elon Musk, and even scientists who employ A.I., like Stephen Hawking. Musk had warned that A.I. proliferation was more dangerous than nuclear weapons, and Hawking said using A.I. was like summoning demons, and “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.” Gates has merely agreed with both of them, openly.


Imagine A.I. having access to all this, and more: SkyNet in the making

The Undersecretary’s memo was a response to these comments: a call for a moratorium on all military adaptation of A.I., pending a serious in-depth study of the risks. It didn’t happen as he had hoped, certainly not at DARPA, the Defense Advanced Research Projects Agency, which is working on countless A.I. systems in virtually every aspect of intelligence and military operations, including things as mundane as record keeping. And, like SkyNet… they are largely intended to be linked together, because virtually all modern weapon systems are designed to be fully integrated into a multi-force (all branches of service, including those of allies and their command structure), global-theater-wide C&C (Command and Control) architecture, along with any useful data resources. That allows the actions and intel available to any one combat unit or command level to also be available to, or directed by, all others deemed appropriate, enabling unparalleled tactical advantages to decision makers and fighting elements, from the President on down to a Carrier Task Force, a squadron of bombers, a missile already in flight, any group of soldiers including a lone forward observer hiding in the bushes, or a single drone on patrol.

That does not bode well for anyone caught in an A.I. error-induced crossfire.

Any such crossfire error will be hard to prevent, or to pin down and repair in time to prevent a repeat. The military has elected a policy of modular A.I. algorithms which are plug-and-play to a given platform, system, or communicative unit’s needs. Modularity means uniformity on one hand, and kitchen-sink do-all on the other, where one-size-fits-all thinking is quite prone to errors. I wonder if that’s why one of their A.I. supercomputer systems is actually called MAYHEM, part of DARPA’s VOLTRON program? You can’t make this stuff up without a “Muahahaha” frame of mind.


Compare MAYHEM, involved in real DARPA ‘games,’ & WOPR from the cult-classic movie, War Games, starring Matthew Broderick


But algorithms, big black boxes, and C&C connectivity are not the only big plans. The Pentagon has established a seriously WOPR-like ‘war room’ (part shown below) for all A.I.-related projects and their use (some 600 projects), called the Joint Artificial Intelligence Center (JAIC), and it will include MAYHEM… if not directing bloody mayhem at one point or another. Just one of JAIC’s projects is an A.I. product ‘factory.’ Now, tell me again how you don’t believe the real world follows science fiction…

Air Force Cyber Command online for future operations

Compare the real JAIC command center, circa 2016, with the WOPR war room, circa 1983 technology


Fini (Muahahaha!), have a nicer dAI…

 

Additional reading:

On the birth of the Illuminati (NWO): http://www.ancientpages.com/2017/03/30/illuminati-facts-history-secret-society/

On China’s facial recognition A.I.: https://www.cbsnews.com/news/60-minutes-ai-facial-and-emotional-recognition-how-one-man-is-advancing-artificial-intelligence/

On A.I. emotional recognition and how it will be used: http://www.sbwire.com/press-releases/artificial-intelligence-emotional-recognition-market-to-register-an-impressive-growth-of-35-cagr-by-2025-top-players-ibm-microsoft-softbank-realeyes-apple-1161175.htm

On A.I. as emotionally intelligent systems: https://machinelearnings.co/the-rise-of-emotionally-intelligent-ai-fb9a814a630e

On Humans as ‘programmable applications’ used by A.I.: https://medium.com/@alasaarela/the-human-api-f725191a32d8

On A.I. and need for good data input: https://www.cio.com/article/3254693/ais-biggest-risk-factor-data-gone-wrong.html

Russia’s A.I. Supercomputer uses U.S. tech: https://www.defenseone.com/technology/2019/03/russias-new-ai-supercomputer-runs-western-technology/155292/?oref=d-dontmiss

About the Author

The reader may appreciate that I have a deep computer background, ranging from microcomputer hardware design, repair, and programming at machine-code and high-level-language levels, to similar functions at the super-computer industry level. I have, in fact, developed operating and language systems, including cross-compiler platforms between microcomputers and super-computers for high-level math subroutines.

I at one time simultaneously operated a chain of computer stores, a computer camp for kids and executives, a data research firm, and a software house for commercial software I myself had written, and custom software for government and corporate clients. I have additionally taught a variety of computer-related courses at college level, and commercially, elsewhere, and developed patentable processes in virtual reality imaging, mass storage systems, and other areas. I’ve conducted seminars in super-computing methods for the very people who today employ A.I., such as NASA, JPL, Los Alamos, the military and intelligence communities, major corporations, universities, and think tanks. I’ve even had lunch with both Steve Jobs and Bill Gates at the same time.

While I prefer to be seen as an activist and investigative author, as a ‘conspiracy theorist,’ I have an unusually long list of successful theories which have actually shut down illegal operations, led to resignations of officials and bad Cops, helped to put people in jail or obtain restitution for victims. I have even predicted and in at least one and possibly two instances, prevented local terrorism, and other criminal events, and warned of an assassination attempt on a Presidential candidate which came to pass. I specifically warned of terrorism involving the downing of the WTC by jet liner terrorism, and a series of resulting Middle East wars — nearly two years before 9-11. All this is the basis of my many books on such topics, and on abuse of power and personal privacy and security issues, where I have significant additional background.

Just one of my short works, the 25 Rules of Disinformation, has been downloaded over 50 million times, has been published in nearly a half dozen books by other authors, and has been used in college courses on Political Science, Psychology, and Journalism. All this is why I’ve been a guest on Ground Zero and other national and international shows, myself, many dozens of times. If I see and talk of conspiracy, it is for cause... especially if in the computer industry.

 

 

About Author H. Michael Sweeney

Author of privacy/security/abuse of power, Founder Free Will Society, PALADINs (Post Apocalyptic Local Area Defense Information Network)

Posted on March 18, 2019, in Abuse of Power, Conspiracy, Government, Political Control Technology, Science, Technology. Bookmark the permalink.
