There is no way that some American-based effort to try to slow down AI will make any difference to the work being done in China, Israel, India and many other places. This has to be managed, and government is NOT the way to do that. It is out and real and we have to work to stay ahead. Whining and trying to stop something that is coming from many places is not useful.
I wish humans weren't so hubristic and foolish. We are destined to be guided by the hands of the least thoughtful and caring. And so though I would prefer we carry on with great caution and with a sane, level head about the benefits and threats - we won't.
If we could go back in time and stop the development of thermonuclear weapons, would we? I would. We started with a bomb; developed it into competing arsenals; and suddenly we had a genuine existential threat.
There are lots of factors threatening humanity - including AI - but politicians continue to run on election cycles in which they can't deal with obvious dangers (like climate change) even after decades, while information, and disinformation, spreads in minutes. Given that, I don't think shutting things down for 6 months is going to make the slightest difference.
Shut it down. The thing created is only as great as its creator, and given that it is created by humans with all their failings, it will result in nothing but evil.
I'm skeptical that we could coordinate globally to regulate AI when we can't seem to do the same with another existential threat caused by easy access to technology: climate change. The only path I see through either of these dangerous territories is to promote philosophies that encourage every human toward self-actualization, leading each of us (most importantly our leaders) to prioritize our species (especially our most vulnerable people) over specific people or countries. I'm sure that sounds naive, and it is, but I'm not sure what else to do. Also, echoing Hezekiah Holland's comment, AI _could_ be harnessed to _resolve_ existential threats like climate change.
I’m excited by the potential of AI. However, the MIT 70s LtG model continues to offer a foundation for considering that many factors threaten the earth and humanity. Focusing on AI as the current source of demise is, at best, a distraction and, at worst, attempting to slow its growth denies us the opportunity to explore the technologically enabled paths for overcoming other threats. Trying to stop AI is a naive and foolish idea. Equally silly is not exploring how to harness its potential and contain its risks.
License its use like radioactive material.
Provide for the good; understand the potential of a doomsday event if misused.
If I was an AI researcher, I’d also love to stifle competition
Let it run free. If you read the book series Scythe, you'll see they predicted a computer running everything, and overall the outcome is good.
Everyone needs to go back and read Harari’s Homo Deus and consider the paths ahead for humanity - there is one path that lifts up the value of our diversity and creativity... we must protect that rather than trying to outrun the machines. https://www.ynharari.com/homo-deus-after-god-and-man-algorithms-will-make-the-decisions/
Pause for 6 months, assess, and decide for the next 6 months.
Let It Be