Academic Integrity: tutoring, explanations, and feedback — we don’t complete graded work or submit on a student’s behalf.

ID: 3748945 • Letter: P

Question

Please type (so it's easy for me to copy and paste) your response (thoughts) after reading the following paragraphs. (min. 5 sentences, around 150-200 words)

I believe we can't build AI without losing control over it. If we keep making machines smarter and smarter, they will eventually gain the power to continue growing on their own. This will lead to a level of intelligence beyond anything we humans know, and therefore one we can't control. Even with growing concerns about AI getting out of control, we will continue to advance our technology, but will we know when to stop before it's too late?

We should continue to evolve our AI technology, but we should set a boundary or limit. For example, one boundary would be to never connect AI directly to our brains. These machines would learn too much about how our brains work and could then learn how to control us, or do other things that would not benefit us. If AI took control of us, we would not live as we do now and would probably suffer. We might even diminish as a species.

Explanation / Answer

I loved your question, and I will try my best to share my thoughts on this whole scenario. If it helps, please leave a like so I know my effort was worth it.
Thank you!

This is a very controversial, or you could say context-dependent, topic. As a computer science engineer, I am very fond of the innovations we are achieving, and will go on to achieve, with the help of AI. But I share your doubts: to what extent should we develop it, and what if it takes control over humans, the way some science-fiction movies show machines becoming too hard to handle?

Let's brainstorm on this topic.

There is no denying the fact that AI has totally changed the way we look at technological development.

Since you asked for around 150 words, I won't go through everything AI has done so far or will do; let's just discuss its impact on humans.

When it comes to the possibilities and possible perils of artificial intelligence (AI), that is, machines learning and reasoning without human intervention, there are lots of opinions out there.

"The development of full artificial intelligence could spell the end of the human race… It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete and would be superseded."
This was said by the one and only Stephen Hawking, who was always ahead of every technology humans could imagine. From this alone we can conclude that there are certain limits to the development of AI, and we need to find that limit before it's too late.

Most researchers agree that a superintelligent AI is unlikely to exhibit human emotions like love or hate, and that there is no reason to expect AI to become intentionally benevolent or malevolent.
See, we can't assume robots will have feelings just from watching science-fiction movies, so when we look at the threats, we shouldn't base our thinking on those scenarios.

Consider the situation where the AI is programmed to do something devastating. Autonomous weapons are artificial intelligence systems programmed to kill. In the hands of the wrong person, these weapons could easily cause mass casualties. To avoid being thwarted by the enemy, they would be designed to be extremely difficult to simply turn off, so humans could lose control of such a situation. Here the danger comes from how we use the technology, not from the AI itself.

The AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal. This can happen whenever we fail to fully align the AI’s goals with ours, which is strikingly difficult.

Stephen Hawking, Elon Musk, Steve Wozniak, Bill Gates, and many other big names in science and technology have recently expressed concern.
The idea that the quest for strong AI would ultimately succeed was long thought of as science fiction, centuries or more away. Now many experts take seriously the possibility of superintelligence in our lifetime, while others still guess that human-level AI is centuries away.

Since it may take decades to complete the required safety research, it is prudent to start it now.

Because AI has the potential to become more intelligent than any human, we have no surefire way of predicting how it will behave.
We can't use past technological developments as much of a basis, because we've never created anything with the ability to outsmart us. The best example of what we could face may be our own evolution. People now control the planet, not because we're the strongest, fastest, or biggest, but because we're the smartest.
In effect, there is a race between the growing power of AI and the wisdom with which we manage it, and the best way to win that race is not to impede the former, but to accelerate the latter, by supporting AI safety research.

All of this is still in progress, these are just speculations, and there is still a huge amount of time left.

Sorry this became a bit long, but as I said, the topic is controversial and we are talking about the future; in fact, more than 30% of people in the USA have no idea about AI.
In conclusion, I would say that people need to keep in mind that we humans create technology. AI, by itself, is not looking to destroy humanity.
We can manage these systems, and by setting limits we can justify their existence; besides, such development is not going to arrive soon, as we are still very new to all this.

Regulators across the world need to work closely with academics and citizens' groups to put brakes on both the harmful uses and the harmful effects of AI.
Some of this will involve laws regulating the data that fuels AI and governments' own use of AI, and other parts will involve bans on certain kinds of uses of AI.

Most ethical questions around AI, such as liability for independent decisions made by AI, might not be questions we need to answer right now, given that we are still far from strong AI. But we already face difficult questions about harms caused by AI, from joblessness to discrimination when AI is used to make decisions.