AI will have to master MacGyver's well-known knack for improvisation, including for autonomous AI cars

MacGyver solved it again!

Who or what is a MacGyver, you may wonder?

Well, a lot of people have heard of MacGyver, the TV series and its main character, who always manages to find a clever way out of a complicated situation, using his ingenuity to devise a solution from fairly everyday items.

Fans know he carries a Swiss Army knife instead of a gun, because he believes his creativity and inventiveness will let him deal with any unfortunate predicament (the knife comes in handy when you have to defuse a bomb, or when you want to take apart a toaster and reuse its electronic parts for an entirely different purpose and, in the end, save your life).

It turns out that you don't necessarily need to have watched the show or seen YouTube clips to know what it means to "MacGyver" your way through a complicated task (the term has become part of our everyday lexicon).

In short, we now describe any kind of inventive, improvised fix as a MacGyver approach, provided it is an elegant solution to a seemingly intractable problem.

Let’s look at that statement.

A very important detail is that the challenge itself has to be genuinely hard.

If the challenge is simple and not fraught with complications, you can solve it by ordinary means and you won't need to put on a MacGyver-style thinking cap.

Another important facet is that the solution cannot be obvious.

In other words, if anyone can see right away how to solve the problem, you don't need to ascend into the problem-solving stratosphere; you can just go ahead and solve it.

Okay, so the challenge has to be complicated or seemingly insoluble, and the solution has to be non-obvious, requiring real mental effort to find.

What else?

The challenge has to be solvable.

This is vital and often hard to know at the outset of the problem-solving process.

Often, when a challenge arises or emerges, you are unsure whether any means exists to solve it.

As such, you might explore a variety of candidate answers and, in doing so, gauge which of them are actually feasible for solving the real problem.

In the MacGyver tradition, you hope to discover a solution, which is comforting, but you cannot count on finding one.

We can say that it is probably helpful to assume a solution can be found, which can cheer you up when you are trying to crack a thorny challenge and can also keep you motivated.

Those who give up right away and assume there is no solution have, in effect, thrown in the towel, and therefore will not put in the effort to find one.

That said, there is also the real-world possibility that, in the end, there is no solution at all (unlike the TV show, which reliably delivers a storybook happy ending).

Another wrinkle is that a solution may exist, but only the passage of time will allow it to be carried out, so you may not be able to resolve the predicament right away, even if you already know how it can be resolved.

How can you have found a solution and yet still face a long wait before the problem is actually solved?

An example would be a candle that, when lit, slowly burns through a rope, and once the flame severs the rope, you are freed from a trap.

In this example, you knew a viable solution, but it took time to carry it out.

Suppose, however, that you have no apparent means of lighting the candle.

This becomes another kind of challenge, nested within the larger predicament of being trapped. It is a "new" challenge in the sense that it arises from your chosen solution and may or may not be a direct component of the original challenge of being trapped.

Maybe there's a box of matches in the other room; if you can manage to reach it, you could light the candle and then burn through the rope to free yourself from the trap (reminiscent of a Rube Goldberg contraption).

What you are attempting now hinges on retrieving those matches.

It could be that luck intervenes and a gust of wind knocks the matchbox off a table, scattering the matches, one of which rolls within reach of your trapped spot.

Anyway, the point is that you cannot necessarily pull off a MacGyver right away, and you may have to let time pass for a solution to become viable or to emerge (on a TV show, the solution has to wrap up promptly, since the episode lasts only thirty minutes or an hour, while in real life things can take much longer).

To be a true MacGyver scenario, we expect the solution to be both simple and elegant at the same time.

This criterion of elegance can be hard to pin down in words. It is one of those things where, when you see it, you can decide whether it was elegant or not (beauty being in the eye of the beholder).

In the TV show, MacGyver faces complicated life-or-death situations, but most real-world applications of the MacGyver approach are not about life and death. The fact is that, sometimes, a MacGyver is applied to ordinary problems with modest stakes, while in other cases more vital matters hang in the balance.

This bears on how we think as human beings, as well as on how AI systems are designed and the limits of what they have achieved to date. Keep in mind that today's AI is not even close to being anything equivalent to true human intelligence, which may be shocking for some to accept, but it is so.

Of course, there is a portfolio of settings in which an AI application has been able to carry out a task as well as a human can, and these feats are constantly in the news. However, this is far from being able to exhibit the full range of intelligence and pass any kind of Turing test (a prominent AI means of assessing whether an AI system can appear to possess human intelligence; see my analysis at the link here).

Today's AI systems tend to be classified as narrow AI, meaning they can at best "solve" a narrowly bounded problem; such AI is not AGI (Artificial General Intelligence) and lacks human qualities such as common-sense reasoning (see the link here).

In fact, a vital concern about the growing use of Machine Learning (ML) and Deep Learning (DL) is that these computational pattern-matching approaches tend to be brittle, most likely falling apart when confronted with exceptions or edge cases. Any scenario that requires or invites a MacGyver is by definition an exceptional or rare case (otherwise, some standard set of rules or brute-force solution method would suffice).
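To make the brittleness point concrete, here is a minimal sketch (not any real self-driving stack; the scenario features and thresholds are invented for illustration). A pattern matcher trained only on everyday scenarios will confidently mislabel a rare case unless it explicitly checks how far an input falls from everything it has seen before:

```python
# Toy illustration of pattern-matching brittleness: a matcher trained only on
# routine driving scenarios must detect when an input is unlike its training
# data, or it will force-fit a rare case onto the nearest routine label.
from dataclasses import dataclass
import math

@dataclass
class Scenario:
    label: str
    features: tuple  # (speed_mph, obstacle_distance_m, obstacle_speed_mph) - made-up features

# "Training data" covering only routine driving.
KNOWN = [
    Scenario("cruise", (55.0, 200.0, 55.0)),
    Scenario("follow", (30.0, 20.0, 30.0)),
    Scenario("stop_at_light", (5.0, 10.0, 0.0)),
]

def nearest(features):
    """Return (best_match, distance) using plain Euclidean distance."""
    best = min(KNOWN, key=lambda s: math.dist(s.features, features))
    return best, math.dist(best.features, features)

def classify(features, max_distance=25.0):
    """Match to a known scenario, or admit the case is out of distribution."""
    best, d = nearest(features)
    if d > max_distance:
        return "EDGE_CASE"  # a brittle system skips this check and forces a match
    return best.label

print(classify((31.0, 22.0, 29.0)))   # close to "follow" -> routine match
print(classify((60.0, 5.0, -40.0)))   # e.g., a wrong-way driver: nothing like training data
```

The distance threshold stands in for the out-of-distribution detection that real systems need; without some such check, the second input would simply be forced onto the nearest everyday label.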

Here's an intriguing question to ask: "Will the advent of true AI-based self-driving cars potentially be delayed by edge cases and, if so, could MacGyver-like AI approaches help overcome those obstacles?"

The answer is yes, so-called edge cases (another term for exceptions or rare instances) are a primary concern for the safety of true self-driving cars, and yes, if AI systems could employ MacGyver-like capabilities, that could indeed help in dealing with those difficult moments.

Let's unpack the matter and see.

The self-driving car

It's worth clarifying what I mean when referring to true AI-based self-driving cars.

True self-driving cars are ones where the AI drives the car entirely on its own and there is no human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to share the driving effort is considered Level 2 or 3. Cars that share the driving task are described as semi-autonomous and typically contain a variety of automated add-ons known as Advanced Driver Assistance Systems (ADAS).

There is not yet a true self-driving car at Level 5, and we don't even know whether this will be possible or how long it will take to get there.

Meanwhile, Level 4 efforts are gradually trying to gain some traction through very narrow and selective public-roadway trials, though there is controversy over whether such testing should be allowed at all (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).

Since semi-autonomous cars require a human driver, the adoption of such cars won't be markedly different from driving conventional vehicles, so there's not much new per se on this topic (though, as you'll see in a moment, the points made next generally apply).

For semi-autonomous cars, it is vital that the public be forewarned about a disturbing aspect that has arisen lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we must not be misled into believing that the driver can divert attention from the driving task while operating a semi-autonomous car.

You are responsible for the driving actions of the vehicle, regardless of how much automation might be engaged at Level 2 or 3.

Self-driving cars and MacGyver

For true self-driving vehicles at Levels 4 and 5, there will be no human driver involved in the driving task.

All occupants will be passengers.

The AI is driving.

To date, efforts to develop self-driving cars have generally focused on getting the AI to drive in relatively undemanding driving situations.

This makes sense, in the spirit of tackling the "easier" things first (to be clear, none of this is easy), meaning the AI driving system is able to drive in a quiet neighborhood or in everyday traffic conditions.

In addition, if collected driving data is used to train an ML/DL system, chances are that most of that data mainly reflects everyday driving and is thin on rare driving moments.

Think of your own driving.

Most of the time, you drive along thinking about what you'll eat that night or replaying in your head the awkward conversation you had with your boss the other day, without paying much attention to the roadway.

This "mindless" style of driving works on a routine basis.

Then there are the odd moments (hopefully rare) when something out of the ordinary jolts you out of your complacency and you must respond instantly.

It might be a life-or-death scenario in which you must assess, in real time, a complicated traffic predicament, figure out your options, and act on one of them quickly enough and well enough to avert death or destruction.

All in the blink of an eye.

Most would concede that today's AI driving systems are decidedly deficient at handling such moments whenever the situation is one the AI driving system has not "seen" before or been preprogrammed to handle.

A novel or surprising scenario is bad news for today's AI driving systems, and bad news too for the human passengers, for pedestrians, and for nearby human-driven cars.

What can be done about this thorny problem?

The usual answer is to keep up the progression of road trials and collect reams of driving data, in hopes that eventually all the imaginable permutations and combinations of driving situations will have been captured and can then presumably be analyzed and handled.

We should be skeptical of this approach.

Consider Waymo, which has accumulated some 20 million miles of roadway driving in total; while that is at first glance an impressive number, keep in mind that humans drive trillions of miles each year, and so the odds of encountering a needle in the haystack across so comparatively few miles are rather slim.

Insiders in the self-driving industry also know that miles are not just miles, no matter which automaker is doing the road trials, meaning that driving back and forth over the same routes does not yield miles as revealing as driving across radically varied roads and roadway conditions (this is partly a criticism of the so-called disengagement reports about driverless cars; see my analysis at this link here).
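The needle-in-a-haystack arithmetic can be sketched with a simple rare-event model. The event rate below is purely an assumption for illustration (one qualifying edge case per 100 million miles); the point is the shape of the math, not the specific numbers:

```python
# Back-of-envelope rare-event coverage, treating edge-case encounters as a
# Poisson process with an assumed (illustrative) rate.
import math

def expected_sightings(test_miles, miles_per_event):
    """Expected number of edge-case encounters across the test miles."""
    return test_miles / miles_per_event

def prob_at_least_one(test_miles, miles_per_event):
    """Poisson model: P(at least one encounter) = 1 - exp(-lambda)."""
    lam = expected_sightings(test_miles, miles_per_event)
    return 1 - math.exp(-lam)

FLEET_MILES = 20_000_000   # roughly the reported fleet total discussed above
RARE = 100_000_000         # assumption: one such edge case per 100M miles

print(expected_sightings(FLEET_MILES, RARE))            # 0.2 expected encounters
print(round(prob_at_least_one(FLEET_MILES, RARE), 3))   # about an 18% chance of seeing even one
```

Under that assumed rate, a 20-million-mile fleet would more likely than not never witness the edge case at all, which is the crux of the argument against relying on accumulated test miles alone.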

Another proposal is to perform simulations.

Automakers and self-driving tech firms tend to use simulations in addition to on-road driving; there is an ongoing debate about whether simulations should be required before public-roadway use is allowed or whether it's acceptable to do both concurrently, and there is also debate about whether simulations are good enough to substitute for miles actually driven (again, it depends on the kind of simulation performed and how it is built and used).

Perhaps AI driving systems should include a MacGyver-like component, able to cope with the unusual troubles that can arise while driving.

It would not be keyed to particular pre-identified oddball or hardwired cases, but would instead be a generalized component that gets invoked whenever the rest of the AI driving system cannot cope with an unfolding situation.

In a way, it would be like AGI, but confined to the driving domain.
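Architecturally, the idea of a generalized fallback can be sketched as a simple dispatch layer. Everything here (handler names, scene encoding, the stand-in fallback action) is invented for illustration; the real open problem is what goes inside the fallback, for which a safe default merely stands in:

```python
# Sketch of the layered idea: try the routine handlers first, and hand
# control to a generalized "MacGyver" fallback only when none applies.
from typing import Callable, Optional

Handler = Callable[[dict], Optional[str]]  # returns an action, or None if it can't cope

def highway_handler(scene: dict) -> Optional[str]:
    return "maintain_lane" if scene.get("context") == "highway" else None

def intersection_handler(scene: dict) -> Optional[str]:
    return "yield_and_proceed" if scene.get("context") == "intersection" else None

def macgyver_fallback(scene: dict) -> str:
    # Placeholder for the generalized reasoner the article calls for;
    # here, a conservative default action stands in.
    return "slow_and_pull_over"

ROUTINE: list[Handler] = [highway_handler, intersection_handler]

def decide(scene: dict) -> str:
    for handler in ROUTINE:
        action = handler(scene)
        if action is not None:
            return action
    return macgyver_fallback(scene)  # invoked only for unmatched scenarios

print(decide({"context": "highway"}))           # routine handler applies
print(decide({"context": "mattress_on_road"}))  # nothing matches -> fallback
```

The design point is the separation of concerns: the routine handlers can stay narrow and data-driven, while the fallback only ever sees the cases the rest of the stack has explicitly declined.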

Is that possible?

Some argue that AGI is AGI, and that seeking to build an AGI for a specific domain runs counter to the whole concept of AGI.

Others argue that when a human driver sits in a car, they apply their general intelligence solely to the driving domain, not to solving world hunger or any other problem, and therefore we can focus our attention on an AGI for the driving domain alone.

Hey, maybe we should apply a MacGyver to the very challenge of solving edge cases and come up with an elegant way to do so, which might or might not involve putting a MacGyver into the AI driving system.

It's a twist, for sure.

Conclusion

A useful article on the AI challenges of solving demanding MacGyver-like problems was written by Tufts University researchers Sarathy and Scheutz (here's the link). The authors note that an AI system would likely need to take on many arduous tasks and subtasks when working through any MacGyver-like situation, including being able to perform impasse detection, domain transformation, problem restructuring, experimentation, discovery detection, domain extension, and so on.

In essence, it is very hard to get an AI system to act like MacGyver, even when a Swiss Army knife is at hand.

For an AI driving system, keep in mind that the MacGyver component would have to act in real time, having perhaps only a few seconds for an action to be chosen and performed.

In addition, the actions taken would likely carry life-or-death consequences, including the ethical qualms bound up in the trolley problem (which entails choosing between one set of deaths or injuries and another; see my explanation at the link here).

If you say that we should not pursue a MacGyver-like capability, the obvious question arises of what alternatives we have, given that, in the meantime, self-driving cars are advancing in the absence of any such inventive or even remotely similar capacity.

There is also the hope that if we could build a MacGyver for the AI driving domain, we could then extend it step by step into other domains, enabling a gradual rollout of an AGI across all domains, though that is quite debatable and a story for another day.

MacGyver is known for saying that he can do whatever he needs to if he puts his mind to it.

Can we get AI to do everything we need it to do if we put our minds to it?

Time will tell.

Dr. Lance B. Eliot is a world-renowned artificial intelligence (AI) expert with over 3 million views accumulated across his AI columns. As a seasoned high-tech executive and entrepreneur, he combines hands-on industry experience with in-depth academic research to provide cutting-edge insight into the current and future state of AI and ML technologies and applications. A former professor at USC and UCLA, and director of a pioneering AI lab, he speaks at major AI industry events. Author of more than 40 books, 500 articles, and 200 podcasts, he has appeared in media outlets such as CNN and has co-hosted the popular radio show Technotrends. He has served as an advisor to Congress and other legislative bodies and has received numerous awards and recognitions. He serves on several boards of directors, and has worked as a venture capitalist, angel investor, and mentor to startup founders.
