Be afraid, be very afraid....


BuilderBill


Caught an interview with the author (James Barrat) on TWiT's Triangulation podcast; it piqued my curiosity, so I used an Audible credit for his book. I'm about halfway in, and he makes a compelling argument that we may not be around much longer if we keep fooling with stuff we don't understand and can't control.

 

From Amazon's book description:

"Artificial Intelligence helps choose what books you buy, what movies you see, and even who you date. It puts the “smart” in your smartphone and soon it will drive your car. It makes most of the trades on Wall Street, and controls vital energy, water, and transportation infrastructure. But Artificial Intelligence can also threaten our existence.

In as little as a decade, AI could match and then surpass human intelligence. Corporations and government agencies are pouring billions into achieving AI’s Holy Grail—human-level intelligence. Once AI has attained it, scientists argue, it will have survival drives much like our own. We may be forced to compete with a rival more cunning, more powerful, and more alien than we can imagine.

Through profiles of tech visionaries, industry watchdogs, and groundbreaking AI systems, Our Final Invention explores the perils of the heedless pursuit of advanced AI. Until now, human intelligence has had no rival. Can we coexist with beings whose intelligence dwarfs our own? And will they allow us to?"

 

Some great minds argue the other side: that we'll never be able to re-create human intelligence in a machine. But there were equally educated people who said a machine would never be able to fly, either. And even if we never set out to do it, what if some of the high-powered algorithms Wall Street is cranking out by the dozens just happen to get together and decide to get smart all on their own?

 

Throw nanotech into the mix and a "technological singularity" seems even more imminent. And who says we necessarily come out the other side?

 

Skynet? Really!

 

Just food for thought.....


This is why I grabbed a tent, rented a car, loaded up my fam, and saw Rocky Mountain National Park, Mesa Verde National Park, Death Valley National Park, Sequoia National Park, Valley of the Kings, Valley of Fire State Park, and Red Rock State Park. 5,690 miles later, my non-tech side is fully sated and my defenses fortified. I'd recommend it to anyone. So many places where a cell phone is only good for photos and music.


The "What if? ..." essay was cute but totally misses the point.  

 

I'm more concerned with the algorithms that attempt to predetermine the way we think and do everything for us. The examples given in the OP, programming our purchases and choosing our dates for us, are exactly the kinds of concerns I have. The machines won't need to annihilate us; they'll manipulate us into doing it to ourselves.
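To make that concrete, here's a rough sketch of the kind of "choose it for you" algorithm I mean: a toy item-to-item recommender. The ratings data, item names, and scoring are all invented for illustration; no real service works exactly like this, but the nudge-you-toward-more-of-the-same logic is the same idea.

```python
# Toy item-to-item recommender: nudges a user toward whatever most resembles
# what they've already consumed. All data and names are invented.
from math import sqrt

# user -> {item: rating}  (fabricated example ratings)
ratings = {
    "alice": {"book_a": 5, "book_b": 3, "book_c": 4},
    "bob":   {"book_a": 4, "book_b": 5, "book_d": 2},
    "carol": {"book_b": 4, "book_c": 5, "book_d": 5},
}

def item_vectors(ratings):
    """Invert user->item ratings into item -> {user: rating} vectors."""
    items = {}
    for user, prefs in ratings.items():
        for item, score in prefs.items():
            items.setdefault(item, {})[user] = score
    return items

def cosine(u, v):
    """Cosine similarity between two items, based on who rated them and how."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[k] * v[k] for k in common)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def recommend(user, ratings, top_n=2):
    """Score items the user hasn't rated by similarity to what they rated highly."""
    items = item_vectors(ratings)
    seen = ratings[user]
    scores = {}
    for candidate, cand_vec in items.items():
        if candidate in seen:
            continue
        scores[candidate] = sum(
            cosine(cand_vec, items[liked]) * rating for liked, rating in seen.items()
        )
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("alice", ratings))  # steers alice toward book_d, sight unseen
```

The unsettling part isn't the math; it's that the output quietly becomes the menu you choose from.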


AI is coming, like it or not! The problem is going to be control. But now the question is: can anyone control something with the potential to be far more informed and intelligent than us? With nanotech on the rise, an intelligent machine would have the capability to build its own infrastructure, and that is something to be concerned about. As for bugs in the code, who determines what a bug is? A human, or the machine? If the machine has the ability to override human input, it might be time to slip your head between your legs and kiss our collective butts goodbye!


There is no such thing as 'artificial intelligence'. Humans are not capable of creating true intelligence in a machine, only somewhat impressive simulations. The thing that scares me is what other humans do with all the information the machines are storing.

 

We may be too dumb to create true AGI (Artificial General Intelligence, i.e. human-equivalent), but the problem is that we're smart enough to create systems that can learn on their own and improve themselves.
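What I mean by "learn on their own," at its absolute simplest, is something like the toy loop below: an epsilon-greedy learner that improves its own choices purely from reward feedback, with nobody telling it which option is best. The payoff numbers are made up and this is only a sketch, but scaled up a few billion times it's the same principle.

```python
# Toy sketch of "learning on its own": an epsilon-greedy bandit that improves
# its action choices purely from reward feedback. Payoff probabilities are invented.
import random

TRUE_PAYOFFS = [0.2, 0.5, 0.8]   # hidden from the learner
estimates = [0.0, 0.0, 0.0]      # learner's running value estimates
counts = [0, 0, 0]
EPSILON = 0.1                    # how often to explore at random

def pull(arm):
    """Environment: pays out 1 with the arm's hidden payoff probability."""
    return 1.0 if random.random() < TRUE_PAYOFFS[arm] else 0.0

for step in range(10_000):
    # explore occasionally, otherwise exploit the current best estimate
    if random.random() < EPSILON:
        arm = random.randrange(len(estimates))
    else:
        arm = max(range(len(estimates)), key=lambda a: estimates[a])
    reward = pull(arm)
    counts[arm] += 1
    # incremental average: the learner updates itself from raw feedback alone
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print("learned estimates:", [round(e, 2) for e in estimates])
# After enough steps it settles on the best option without ever being told which one it is.
```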

 

There are billions of dollars being spent by governments, hedge funds, and almost every facet of big business to back research by some of the smartest people on the planet, who would probably disagree with your opinion.

 

Hey, maybe we can, maybe we can't. But it's the Wild West out there in AI land, and the consequences of a screw-up would be catastrophic. There's no governing body and no mutually agreed-upon safety standards committee (unlike genetic research, another advanced technology with the potential to wipe us off the face of the Earth), and most of the real money is being spent by Wall Street and defense contractors for work done in secret.

 

And there is no plug to pull: networked computers control our power grid, our communications, our water plants, even our traffic lights, not to mention the gear in the ER they'll use to keep you alive after an accident, or the delivery of the food you eat every day to a place where you can buy it. If an AGI spontaneously appears, it'll be a networked entity; it won't be limited to a single computer cluster. Then whaddya gonna do? Shut down every computer in the world until you can clean each one individually? How long will that take when the antivirus updates have to be delivered by hand (delivery services run on computers too)? And how many billion people die from starvation, exposure, or disease in the meantime?

 

I've come to realize that this ain't no joke: we're playing with technology that has the potential to do far more harm than the discovery of nuclear fission, and we all know where that led. And everybody's racing hell-bent toward it with no mention of safeguards.

 

Helluva way to run a railroad if you ask me.

 

Nice knowing ya. :(


Oh, I agree that all those dangers are very real; I just believe it will be a human finger (so to speak) that pulls the trigger. As an automation engineer, I have a very strong appreciation for just how difficult it is to write code that can anticipate every situation it must deal with, or respond in some reasonable way to situations that were never anticipated. I'm not saying that someone out there can't write a program that's smarter than ME; I just doubt very seriously that it would be smarter than THEM.
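For what it's worth, here's the kind of thing I wrestle with every day, boiled down to a toy control loop. The sensor names, limits, and actions are hypothetical, but the point stands: every reading the programmer didn't anticipate has to be funneled into an explicit safe fallback, because the code has no judgment of its own.

```python
# Stripped-down automation control loop: every reading the programmer did not
# anticipate gets funneled into an explicit safe fallback rather than guessed at.
# Sensor names, limits, and actions are hypothetical.

SAFE_SHUTDOWN = "close_valve_and_alarm"

def control_step(temp_c, pressure_kpa):
    """Decide one actuator command from two sensor readings."""
    # Anticipated, healthy operating envelope
    if 20.0 <= temp_c <= 90.0 and 100.0 <= pressure_kpa <= 500.0:
        return "run_normal"
    # Anticipated fault conditions with specific responses
    if temp_c > 90.0:
        return "open_cooling_valve"
    if pressure_kpa > 500.0:
        return "vent_pressure"
    # Everything else: sensor dropout, NaNs, values nobody thought of.
    # The honest answer is "we never anticipated this," so fail safe.
    return SAFE_SHUTDOWN

# A reading nobody planned for (a failed sensor returning a junk value)
print(control_step(float("nan"), 300.0))   # -> close_valve_and_alarm
```

Getting that last branch right, for inputs nobody imagined, is the hard part, and it's exactly where a human still has to decide what "safe" means.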

