Good or Bad AI-based solutions?

Artificial Intelligence

Many years ago, Stephen Hawking, one of the most brilliant minds of the early 21st century, said about AI: “The real risk with AI isn’t malice but competence. A super intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in real trouble.”

For many years, there has been much debate about the intrinsic goodness or badness of AI-based solutions. Are we destined for a dystopian future where AI decides we are no longer good enough? Or is the biggest risk in the short to medium term the malicious human tendency to corrupt AI-based solutions? In any case, we must develop proper governance and standards that allow us to have confidence in the safety and trustworthiness of AI applications.

According to a Tutorials Point article, “Good AI is something which perfectly understand the user expectation and deliver the right product to them. Bad AI is something which doesn’t understand the human expectation and display unwanted promotions in the unwanted place [1].”

Despite the previous concept, one aspect to consider is that there is no implicit good or bad in AI: it will simply respond with results derived entirely from its learning. The goodness or badness of AI will thus depend on how well we train it and, perhaps most importantly, how well we test it. There is no doubt that current engineering practice considers safety, but we simply don’t know enough about the way the risks will emerge. Even in cases where AI has no safety impact, it may still create ethical and moral dilemmas by perpetuating unsustainable behaviors and points of view. To close this gap in current AI-based implementations, we can look at how other new technologies have been tempered by establishing controls and practices that make them safe. Historically this has happened through both thoughtful design and responses to disasters, and we can learn from the way technology domains like aircraft, nuclear power and medical devices have evolved [2].

One rising application field for AI-based solutions is Human Resources. AI is becoming increasingly widespread and democratized in its uses, so it’s no surprise that HR and talent professionals are turning to AI to future-proof their talent processes and save resources. A stretched HR professional, swamped with resumes and interview scheduling while still trying to meet the needs of current employees and engage in strategic workforce planning, can turn to AI to automate routine tasks. Yet this is still not a task that can be led 100% by AI applications. A clear example of the complexity of the topic is Amazon’s failed recruiting tool: after Amazon trained the algorithm on 10 years of its own hiring data, the algorithm became biased against female applicants. The word “women,” as in “women’s sports,” would cause the algorithm to rank applicants lower [3].
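The kind of bias Amazon encountered can be surfaced with a very simple fairness check. The sketch below uses made-up data and group names (not Amazon’s actual pipeline): it compares selection rates per group and applies the common “four-fifths rule” heuristic to flag a disparity.

```python
# Hedged sketch (hypothetical data): compare selection rates per group
# (demographic parity) on a model's screening decisions.
def selection_rates(decisions):
    """decisions: list of (group, was_selected) pairs."""
    totals, selected = {}, {}
    for group, picked in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(picked)
    return {g: selected[g] / totals[g] for g in totals}

# Hypothetical screening outcomes from a resume-ranking model.
outcomes = [("men", True)] * 8 + [("men", False)] * 2 \
         + [("women", True)] * 3 + [("women", False)] * 7

rates = selection_rates(outcomes)
print(rates)  # {'men': 0.8, 'women': 0.3}

# Four-fifths rule heuristic: flag if one group's selection rate
# falls below 80% of another group's rate.
biased = min(rates.values()) / max(rates.values()) < 0.8
print(biased)  # True
```

A check like this only detects a symptom; fixing the bias still requires rethinking the training data itself, which is why testing the AI matters as much as training it.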

The following figure shows three ways to differentiate “good AI” from “bad AI”: automation vs. augmentation, data quality, and human intervention.

Figure: 3 ways to differentiate “good AI” and “bad AI”

On the other hand, Blue Yonder and the University of Warwick have recently released a report exploring the digital readiness of today’s retail supply chains, with only 15% of global retailers reporting prescriptive or autonomous supply chains driven by artificial intelligence and machine learning. How can business owners differentiate between “good” AI and “bad” AI when evaluating what powers their organization’s supply chain? The report defines five key aspects to consider for a “good AI” application [4]:

  • Interconnected: AI models are built to mimic reality, but the closer the model can get to the complexity and interconnectedness of reality, the better that model is. Some AI solutions start with a base forecast and use machine learning to iteratively add on factors. However, AI solutions that make fewer baseline assumptions are stronger, as they consider how factors change and impact each other rather than expecting the same things to happen repeatedly.
  • Dynamic: AI solutions must be dynamic as reality is rapidly changing in complex ways. To effectively deal with uncertainty spurred by the rapid velocity of change day to day, good AI can keep up with changes in real-time and adapt accordingly on its own. Along with responding quickly to changing influences, Good AI also understands that forecasts are not certain. If they were, we would all be rich, but humans are not 100% predictable. Good AI is not only able to limit the unpredictability but also understand it, creating a probabilistic view of the forecast based on the individual uncertainty of each location, item, and day.
  • Explainable: An example of bad AI is “Black Box” AI, a type of model that does not allow the user to follow its reasoning. It is essentially an impenetrable system that offers no human collaboration capabilities. To thoroughly evaluate and trust an AI solution, the model must follow a “Glass Box” composition, so that the machine’s thinking can be observed and understood.
  • Automated: automation solutions can add deeper layers of transparency and understanding that can be applied across the supply chain.
  • Scalable: AI solutions must be scalable so that they can be utilized widely and effectively.
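The probabilistic view described under “Dynamic” can be sketched very simply. The example below uses hypothetical demand numbers and a normality assumption for the quantile band; it summarises a demand history as a mean plus an uncertainty band per location, rather than a single point forecast.

```python
# Illustrative sketch (hypothetical numbers): "good AI" keeps a
# probabilistic view of demand instead of one point estimate.
import statistics

def probabilistic_forecast(demand_history):
    """Summarise demand as a mean plus an uncertainty band."""
    mean = statistics.mean(demand_history)
    stdev = statistics.stdev(demand_history)
    # Rough 10th/90th percentile band under a normality assumption.
    return {"p10": mean - 1.2816 * stdev,
            "mean": mean,
            "p90": mean + 1.2816 * stdev}

# Two stores with the same average demand but very different volatility.
stable_store = [100, 102, 98, 101, 99]
volatile_store = [60, 140, 95, 130, 75]

print(probabilistic_forecast(stable_store))
print(probabilistic_forecast(volatile_store))
```

Both stores forecast a mean of 100 units, but the volatile store’s much wider band calls for different safety stock: that per-item, per-location uncertainty is exactly what a point forecast hides.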

The fact is that AI is developing very rapidly, and companies and consumers must make critical decisions, often implicitly, about the proper uses of this technology for the society of the future. These decisions will affect how we work, how our companies are organized, and how we interact in the world. Some paths lead to a future where technology supports us, providing improved products and services and better workplaces. Other paths lead to fragmented work, impoverished social experiences, and a loss of privacy. The truth is that the evolution of the human race and of AI solutions will be linked more than ever in the coming years, so it is time to lay solid foundations for a smarter, more trustworthy and safer society.

In future posts, we will continue writing about technology and business trends for enterprises. Furthermore, we recommend consulting the following literature to continue your digital transformation journey:

The objective of this blog is to provide a personal vision of how digital transformation trends will impact our daily activities, businesses and lifestyles.

———————————————

[1] https://www.tutorialspoint.com/good-ai-vs-bad-ai#:~:text=Good%20AI%20is%20something%20which,data%20disturbance%20to%20the%20user.

[2] https://www.globalthoughtleaders.org/artificial-intelligence-good-or-bad/

[3] https://www.plum.io/blog/whats-the-difference-between-good-ai-and-bad-ai-in-hr

[4] https://blog.blueyonder.com/good-ai-vs-bad-ai/

———————————————

By Gerardo Beruvides

An Industry 4.0 and smart-mobility expert, his research interests include Industry 4.0, smart maintenance, process optimization, machine learning, AI engineering and cloud-based solutions.
