So Much Uncertainty – InApps


Much work, and many tools, are still needed to integrate artificial intelligence into the software engineering workflow, noted Peter Norvig, Google’s director of research, speaking at the O’Reilly Artificial Intelligence conference in New York last week.

Fundamentally, AI software is inherently different from other widely used forms of software, said Norvig, who is also a co-author of perhaps the field’s most popular textbook, Artificial Intelligence: A Modern Approach.

“One way of looking at the traditional model of programming is to see the programmer as a micromanager, who tells a computer exactly how to do something step by step,” he said. With AI, the programmer should be seen more as a teacher than a micromanager.

This will require big changes in how programming is done, and in the tools used to do it, because AI programming is fundamentally about the model, not the code itself.

“We spent the last 40 years building up tools to build these programs, to deal with text in a good way,” he said, referring to the syntax highlighting, IntelliSense, debuggers, and other features in code editors. “But right now we are creating models instead of text, and we just don’t have the tools to deal with that. We need to retool the industry.”

And just as the tools need to change, so do the programming processes. The typical approach to debugging may not work well for artificial intelligence. “When you get a problem, you give it more training data and it begins to converge to a better answer. But the bug hasn’t gone away. It’s just been hidden, and it may come back,” he said. “We don’t have a way of closing bugs out with machine learning in the same way that we do with traditional debugging.”


The software release cycle could change. Models could be updating themselves and changing on the fly.

You Can’t Handle the (Probabilistic) Truth

The most important difference with AI for the programmer, though, is that it raises a host of issues around uncertainty, he explained.

“AI systems are fundamentally dealing with uncertainty whereas traditional software is fundamentally trying to hide uncertainty,” Norvig said.

Traditional programming is fundamentally about binary outcomes: true or false. If you take $100 out of your checking account, the bank has mathematically definitive proof that it holds $100 less of your money. AI-driven programs, by contrast, are less absolute about the truth. Take a program that detects bank fraud, for instance.

“That’s a fundamentally different process. You can’t say for sure that ‘this is fraud’ and ‘this is not.’ We can only say that probabilistically,” Norvig said. “This is something we have to discover, rather than something that we are given.”
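The contrast can be sketched in a few lines of Python. The ledger operation below is exact and binary; the fraud check returns a probability and forces the caller to pick a threshold. Every name here is hypothetical, and the scoring rule is a toy heuristic standing in for a trained model:

```python
# A deterministic ledger operation: the outcome is exact and binary.
def withdraw(balance: float, amount: float) -> float:
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

# A probabilistic fraud check: the "model" returns a score, not a verdict.
# score_transaction is a toy stand-in for a trained model's probability output.
def score_transaction(amount: float, typical_amount: float) -> float:
    """The further a transaction is from the account's typical amount,
    the higher the (toy) fraud probability, clamped to [0, 1]."""
    ratio = amount / max(typical_amount, 1e-9)
    return min(1.0, max(0.0, (ratio - 1.0) / 10.0))

def flag_if_suspicious(amount: float, typical_amount: float, threshold: float = 0.5):
    """Turn a probability into an action by choosing a cutoff."""
    p = score_transaction(amount, typical_amount)
    return ("review" if p >= threshold else "allow", p)
```

Note that the threshold is a business decision layered on top of the model, not something the model itself can answer: the program only ever says how likely fraud is.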

Or take the example of a self-driving car. An ML-based approach would slow the vehicle by applying the brakes to a certain degree. But we never know exactly how hard on the pedal the AI program will be. Road conditions, brake wear and tear, and countless other factors add an ever-changing set of variables, which in turn changes the program’s response ever so slightly each time out.

“In AI systems, we want the uncertainty to propagate through the model, whereas in the traditional software, we don’t,” Norvig said.

Regular software has great properties of compositionality, in which all the functions work together in a harmonious whole. AI programming doesn’t pass around individual immutable variables, but rather full probability distributions.

“We can make a small change — swap out this speech recognition module for this other one — and that’s no longer isolated to that one component in the text of the program. It now affects the control flow everywhere, all the way through the translation process,” he said.


“And we are not used to dealing with that. We don’t have the tools for expressing that,” he said.
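As a rough sketch of what passing a distribution rather than a point value looks like, the snippet below represents each quantity as a (mean, variance) pair and propagates the variance through simple operations, assuming independent variables. This is an illustration of the idea, not any particular library’s API:

```python
# Each value is a (mean, variance) pair instead of a single number.
# Operations propagate the uncertainty rather than discarding it.

def add(a: tuple, b: tuple) -> tuple:
    """Sum of independent variables: means add, variances add."""
    return (a[0] + b[0], a[1] + b[1])

def scale(a: tuple, k: float) -> tuple:
    """Scaling by a constant k: mean scales by k, variance by k**2."""
    return (k * a[0], k * k * a[1])

# Composing components now means composing distributions: swapping one
# module for another changes the uncertainty everywhere downstream.
sensor_a = (1.0, 0.25)   # hypothetical noisy reading
sensor_b = (2.0, 0.25)
combined = add(sensor_a, sensor_b)   # (3.0, 0.5)
```

This is essentially the simplest possible version of uncertainty propagation; real probabilistic systems carry richer distributions and correlations, which is exactly why composition gets harder.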

Probably True

We have options, the Google research director noted.

One option is the cloud. Amazon Web Services, IBM, Microsoft, and Google, among many others, offer AI-driven services that can easily be called from applications. An image-scanning service, for instance, can ingest an image you send by API and return a list of probable characteristics (location, the level of happiness expressed by each person in the photo, and so on) in JSON. The results can then be easily handled by the rest of a non-AI program.
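The boundary between the AI service and the rest of the program is then just JSON parsing plus thresholds. The response shape below is invented for illustration; real providers use their own field names and likelihood scales:

```python
import json

# Hypothetical JSON body from a cloud image-analysis API. The exact
# field names and score scales vary by provider; this shape is
# illustrative only.
response_body = """
{
  "faces": [
    {"joy_likelihood": 0.92, "bounding_box": [10, 20, 110, 140]},
    {"joy_likelihood": 0.31, "bounding_box": [200, 40, 290, 160]}
  ],
  "landmarks": [{"name": "Eiffel Tower", "confidence": 0.87}]
}
"""

def summarize(body: str) -> dict:
    """Reduce probabilistic API output to plain values the rest of a
    non-AI program can branch on, by applying simple cutoffs."""
    data = json.loads(body)
    happy = sum(1 for f in data["faces"] if f["joy_likelihood"] >= 0.5)
    places = [l["name"] for l in data["landmarks"] if l["confidence"] >= 0.5]
    return {"happy_faces": happy, "likely_places": places}
```

All the uncertainty is collapsed at this boundary, which is exactly what makes this the easy case Norvig describes.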

Trickier, though, are those use cases where you need to “propagate the uncertainty through other steps,” Norvig said. Here, you may want to use AI itself as a gatekeeper, to determine which components should be used. Norvig called this the “ensemble approach.”
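A gatekeeper of this kind can be sketched as a lightweight routing model that picks which specialist component handles each input. The gate and both specialists below are toy stand-ins for trained models, and all names are hypothetical:

```python
# An "ensemble" sketch: a cheap gating model routes each input to the
# specialist component most likely to handle it well.

def gate(text: str) -> str:
    """Toy router: numeric-heavy inputs go to the 'tabular' specialist;
    everything else goes to the 'language' specialist."""
    digits = sum(c.isdigit() for c in text)
    return "tabular" if digits > len(text) / 2 else "language"

# Each specialist is a placeholder for a real trained model.
SPECIALISTS = {
    "tabular": lambda t: ("tabular", t.count(",") + 1),     # e.g. count fields
    "language": lambda t: ("language", len(t.split())),     # e.g. count tokens
}

def ensemble_predict(text: str):
    """Route the input through the gate, then run the chosen specialist."""
    return SPECIALISTS[gate(text)](text)
```

In a real ensemble the gate would itself be a learned classifier, so its routing decision is one more probabilistic step whose uncertainty feeds the pipeline.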

We could also start applying large-scale data analysis, aka “big data,” to programming itself. Just as Google search can return a “did you mean” suggestion for incorrectly spelled queries — using past searches to learn the possibilities — an AI-driven programming tool could return a list of suggestions when a developer writes a line of faulty code.
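A minimal version of that idea, fuzzy-matching an unknown identifier against names seen before, can be written with Python’s standard library. The known-name list here is a made-up corpus; a real tool would mine it from actual codebases:

```python
import difflib

# Hypothetical corpus of identifiers mined from past code; a real tool
# would learn this from large codebases, the way search learns from queries.
KNOWN_NAMES = ["response", "result", "request", "resolve", "reduce"]

def did_you_mean(name: str, known=KNOWN_NAMES, n: int = 3):
    """Suggest up to n close matches for a misspelled identifier,
    using difflib's similarity ratio with a 0.6 cutoff."""
    return difflib.get_close_matches(name, known, n=n, cutoff=0.6)
```

Real code-suggestion systems rank candidates with learned models rather than raw string similarity, but the shape of the interface, a ranked list of probable fixes, is the same.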

Ultimately, though, we’d also need programming languages that “talk in this language of probability and uncertainty,” rather than in definitive Boolean logic. Such a probabilistic language would deal in random variables rather than fixed ones, and could infer values from other values.

“Traditional programs flow only from inputs to outputs,” he said. With probabilistic programming, in contrast, you could ask the program to infer the inputs based on the outputs received.
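The simplest way to see what "inferring inputs from outputs" means is rejection sampling: model a generative process, observe its output, and keep only the hidden inputs consistent with that output. The biased-coin setup below is a standard textbook example, not something from Norvig's talk:

```python
import random

def flip_heads(bias: float, n: int, rng: random.Random) -> int:
    """Forward direction: given the hidden input (coin bias), simulate
    the output (number of heads in n flips)."""
    return sum(rng.random() < bias for _ in range(n))

def infer_bias(observed_heads: int, n: int = 10,
               samples: int = 20000, seed: int = 0) -> float:
    """Backward direction: given the observed output, estimate the
    hidden input by keeping only sampled biases that reproduce it."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(samples):
        bias = rng.random()                      # uniform prior over the input
        if flip_heads(bias, n, rng) == observed_heads:
            accepted.append(bias)                # consistent with the output
    return sum(accepted) / len(accepted)         # posterior mean of the input

# Observing 7 heads in 10 flips, the inferred bias clusters near 2/3.
estimate = infer_bias(7)
```

Probabilistic programming languages automate exactly this kind of inversion with far more efficient inference than brute-force rejection, but the program-runs-backwards idea is the same.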

At a certain point, some problems would best be solved by letting the machine learning system do the entire job end to end. This is already happening in machine translation, he noted. Traditionally, machine translation systems were built as a pipeline of probabilistic statistical models. But now “the field has recognized that you get big improvements in a machine translation system with an end-to-end trained neural network,” Norvig said.


“We’ll see places where the answer is to throw out the legacy traditional software, rather than maintain it, wrap it, or ensemble it,” he said. “Different use cases will have different solutions.”

