Which Of The Following Is Not True About Deep Learning

In other words, no one is perfect – not even executives who are trusted to have all the answers and steer the ship in the right direction. Perhaps that’s why we’re starting to see that, in some instances, AI (and deep learning specifically) should be used to compensate for our physical and computing limitations.

As humans, we can only see so much, do so much, and think so fast before we start glitching. And no matter how hard we try to be perfect at work, we are always going to have biases or disadvantages. For example, we cannot always tell when a bottle is deformed or a pill’s markings are more orange than red. We may also struggle to make the right decision because we lack the full context of a situation.

However, we can make these limitations a nonissue by training AI to see things, connect the dots between seemingly disparate data points, and make decisions in ways that we can’t. This reduces our risk of getting things wrong, which is huge when you think about what it takes to succeed in business.

That said, AI is only as smart as we allow it to be. It can’t help us if we don’t teach it how to help us. That’s why it’s so important that you get the AI model – and the training model – right when you decide it’s time to ask for AI assistance with certain business tasks. This is especially true when using AI to inform, or make, decisions (such as a pass/fail decision during quality inspections).

So, let’s talk about what you need to know before you spend any money on deep learning tools (or any AI-powered automation assistants).

Deep Learning 101

There are a lot of terms being tossed around in relation to AI, including deep learning, machine learning, and neural networks, among others. So, you’re probably wondering, “What’s the difference between machine learning and deep learning?”

Technically, deep learning is a subset of machine learning, as Dr. Yan Zhang explained in this post, and it is more likely to be used when “the dimensionality of the data and the complexity of the model are too big to handle.” Take face detection, for instance. It is true that you could reduce the dimensionality using traditional approaches such as principal component analysis (PCA), and this was done in the famous Eigenfaces approach. But PCA only offers a linear model, which cannot compete with the non-linearities of today’s deep networks when applied to megapixel images.
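
To make that linear-vs-non-linear point concrete, here is a minimal sketch of the Eigenfaces-style linear projection PCA gives you, assuming scikit-learn and NumPy are available and using random pixel data as a stand-in for real face images:

```python
# Minimal sketch: PCA as the linear model behind Eigenfaces.
# Assumes scikit-learn and NumPy; random data stands in for real face images.
import numpy as np
from sklearn.decomposition import PCA

# Pretend we have 200 grayscale face crops of 64x64 pixels, flattened to vectors.
rng = np.random.default_rng(0)
faces = rng.random((200, 64 * 64))

# Keep the 50 principal components (the "eigenfaces").
pca = PCA(n_components=50)
codes = pca.fit_transform(faces)           # each face becomes a 50-dim code
reconstructed = pca.inverse_transform(codes)

# Both steps are just a mean shift plus a matrix multiplication, i.e. a purely
# linear mapping. PCA has no way to capture the non-linear structure (pose,
# lighting, expression) that a stack of deep network layers can model.
print(codes.shape, reconstructed.shape)    # (200, 50) (200, 4096)
```

A convolutional network, by contrast, stacks many non-linear layers, which is what lets it keep up with megapixel images where a single linear projection falls flat.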

For example, at Zebra, we use deep learning when we’re helping customers…

  • Focus a machine vision system on items – or certain qualities of items – for inspection.

  • Improve worker safety. In this case, we may use deep learning in conjunction with cameras to detect when workers enter unsafe areas, get too close to machinery or don’t have the proper personal protective equipment (PPE) on (see the sketch after this list).

  • Predictively schedule maintenance actions to prevent equipment and system downtime.

  • Determine which parts they need to make, and when and where, so they can schedule production more effectively and prevent both delivery delays and inventory waste.
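
As a rough illustration of the worker-safety bullet above, here is a hedged sketch that uses torchvision’s pretrained COCO detector to find people in a camera frame and flag anyone inside a hypothetical hazard zone. A real PPE check would need a detector trained on PPE classes, which the stock COCO model does not have, and the frame, zone coordinates and threshold below are all made up for illustration:

```python
# Sketch only: detect people in a frame and flag anyone inside a hazard zone.
# Assumes PyTorch + torchvision; the frame and hazard zone are hypothetical.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

frame = torch.rand(3, 480, 640)             # stand-in for a real camera frame
HAZARD_ZONE = (200.0, 100.0, 400.0, 300.0)  # x1, y1, x2, y2 (hypothetical)
PERSON_CLASS = 1                            # COCO label id for "person"

with torch.no_grad():
    detections = model([frame])[0]

def overlaps(box, zone):
    """True if the detection box intersects the hazard-zone rectangle."""
    x1, y1, x2, y2 = box
    zx1, zy1, zx2, zy2 = zone
    return x1 < zx2 and x2 > zx1 and y1 < zy2 and y2 > zy1

for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
    if label.item() == PERSON_CLASS and score.item() > 0.8 and overlaps(box.tolist(), HAZARD_ZONE):
        print("Alert: worker detected inside hazard zone:", box.tolist())
```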

In fact, we’re using a deep learning-based solution to help a customer in the fast-moving consumer goods (FMCG) space automate and improve the efficiency of their returns process. Before returned items can be put back into circulation, their lot number must be logged and expiry date verified by a worker. Small font sizes, poor mark quality, and the use of low-contrast text made this a time-consuming and unpopular job, creating bottlenecks and waste as products expired before they could be returned to the shelves. The deep learning-enabled solution we worked with them to implement now allows workers to verify these details automatically by showing the item to a camera, resulting in improved efficiency and reduced waste.

Now, because deep learning is the training of neural networks, it can learn through either supervised or unsupervised processes – much like we do as humans. We learn both in structured educational settings (e.g., school, professional development courses) and as we go about our day. Every new experience we have and every interaction with another person can amount to “training”: we take away more information or a different perspective.
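
To put the supervised/unsupervised distinction in concrete terms, here is a minimal sketch assuming scikit-learn, with synthetic data in place of anything real. Supervised learning fits a model to examples that come with answers; unsupervised learning has to find structure in examples that don’t:

```python
# Sketch: the same toy data, learned from with and without labels.
# Assumes scikit-learn and NumPy; the data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)          # labels: the "answer key"

# Supervised: the model sees both the examples and the correct answers.
clf = LogisticRegression().fit(X, y)
print("supervised prediction:", clf.predict([[4.0, 4.0]]))

# Unsupervised: the model only sees the examples and must find groups itself.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print("unsupervised cluster sizes:", np.bincount(clusters))
```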

The difference between how the human brain learns and how deep learning occurs within a neural network is that the AI/neural network’s “learning” happens in a totally controlled environment. We (humans) are transferring what we’ve learned from our brains to the AI/neural network to help it understand right from wrong. It’s doing what we’re telling it to do, in a way. For example, we use deep learning to train the AI system that inspects semiconductors coming off a fab, feeding it “good” and “bad” images. We teach it what to look for so that it can eventually work autonomously.
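
Here is a minimal sketch of that good/bad training idea, assuming Keras is available and using random tensors in place of real fab images; an actual inspection model would be trained on far more data and be considerably more involved than this:

```python
# Sketch: training a pass/fail inspection model from labeled "good"/"bad" images.
# Assumes TensorFlow/Keras; random tensors stand in for real inspection images.
import numpy as np
from tensorflow import keras

# Pretend we have 500 64x64 grayscale inspection images, labeled 1 = good, 0 = bad.
rng = np.random.default_rng(0)
images = rng.random((500, 64, 64, 1)).astype("float32")
labels = rng.integers(0, 2, 500)

model = keras.Sequential([
    keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 1)),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(32, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(1, activation="sigmoid"),  # probability the part is "good"
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# "Teaching it what to look for": fit on the labeled examples, after which it
# can score new parts on its own.
model.fit(images, labels, epochs=3, batch_size=32, verbose=0)
print(model.predict(images[:1], verbose=0))
```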

Why use deep learning, or AI at all, for inspections when you can just teach a person what’s good or bad? Well, it all boils down to our physical and computing limitations and the need to make them a moot point.

If you really want to feel confident that what you’re shipping to a customer is of the highest quality, or that the quality of a product hasn’t degraded along its supply chain journey, you’re going to need to scrutinize it like no human can. You’re going to need some form of AI to quickly and perfectly inspect it – most likely using a combination of cameras and AI-based software, such as a machine vision system or even a fixed industrial scanner with deep learning optical character recognition (OCR) capabilities.

Which Deep Learning Model is Best?

There isn’t a standard rule that says, “This type of deep learning model is going to be universally applicable in this type of workflow or business setting.” However, these are the models we typically lean into when we have certain objectives:

  • Deep Learning Optical Character Recognition (OCR): This is an easy way to automatically read the text on an image/item, such as a lot number, part number or expiration date. What’s cool about this capability is that it can be deployed out of the box within five minutes. It doesn’t have to be trained, and you don’t need a skilled data scientist to get it online. You can read more about how it works here and see it in action here. A rough sketch of the general idea, using an open-source library as a stand-in, follows below.
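
Zebra’s deep learning OCR is a packaged capability, but if you want a feel for what deep-learning-based text reading looks like in code, here is a rough sketch using the open-source EasyOCR library as a stand-in (it is not Zebra’s product, and the image path is hypothetical):

```python
# Sketch: reading a lot number or expiry date off a product photo with a
# deep-learning OCR library. EasyOCR is an open-source stand-in, not Zebra's
# product; "returned_item.jpg" is a hypothetical image path.
import easyocr

reader = easyocr.Reader(["en"])                  # loads pretrained detection + recognition networks
results = reader.readtext("returned_item.jpg")   # [(bounding_box, text, confidence), ...]

for box, text, confidence in results:
    if confidence > 0.5:
        print(f"read '{text}' with confidence {confidence:.2f}")
```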
