Preparing For The ‘Golden Age’ Of Artificial Intelligence And Machine Learning


Can companies rely on the judgments that AI and ML systems are making in ever greater numbers? These judgments need more checks and balances: IT executives and experts must ensure that AI is as fair, unbiased, and accurate as possible. That entails additional training and increased investment in data systems. According to a new poll of IT executives, organizations need more data engineers, data scientists, and developers to meet these objectives.

The poll found that AI and machine learning projects are at the forefront of most businesses. When ZDNet conducted the poll in August, 44 percent of the represented companies had AI-based technologies actively being developed or implemented, and another 22 percent were working on new projects. Efforts in this area are still in their infancy: 59 percent of surveyed businesses have been experimenting with AI for less than three years. Respondents included executives, CIOs, CTOs, analysts and systems analysts, enterprise architects, developers, and project managers. Technology, services, retail, and financial services were among the industries represented, and the sizes of the companies varied.

Swami Sivasubramanian, VP of machine learning at Amazon Web Services, refers to this as the “golden era” of AI and machine learning. This is because this technology is “becoming a key component of enterprises all around the world.”

IT teams are taking the lead in such initiatives, with most businesses creating their systems in-house. Almost two-thirds of respondents, 63 percent, said their AI systems are created and maintained in-house by IT employees. Almost half, 45 percent, also subscribe to AI-related services via Software as a Service (SaaS) providers. Another 30% utilize Platform as a Service (PaaS), and 28% employ outside consultants or service businesses.

With AI- and ML-driven output, chief digital officers, chief data officers, or chief analytics officers typically take the lead; 50 percent of respondents identified these executives as the main decision-makers. Another 42 percent said individual department heads play a role in the process, and 33 percent of surveyed businesses have corporate committees that oversee AI. One-third of these businesses delegate AI and ML tasks to data scientists and analysts. Surprisingly, CIOs and CTOs were named as decision-makers by only 25 percent of respondents.

Implementation Is Difficult

One of the most significant challenges that companies confront while developing and sustaining AI-driven systems is a lack of available skills. Almost two-thirds of the companies polled, 62 percent, said they couldn’t find people to match the capabilities required for AI initiatives. More than half, 54 percent, say it’s been challenging to integrate AI within their current organizational cultures, and 46 percent say it’s been tough to secure financing for the initiatives they want to implement.

Data engineering is the most in-demand talent to assist AI and ML projects, with 69 percent of respondents citing it. Because AI and ML algorithms are only as good as the data put into them, people with data experience are critical in verifying, cleansing, and ensuring rapid data delivery. Aside from data engineering, businesses require data scientists to create data models and developers to create algorithms and accompanying apps.
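The verification and cleansing work described above can be illustrated with a minimal sketch. This example is an assumption, not something described in the article: the field names, plausibility ranges, and normalization rules are hypothetical choices a data engineer might make before feeding records to a model.

```python
# Illustrative sketch (assumed, not from the article) of a validation and
# cleansing step: drop records with missing or implausible values and
# normalize text fields before the data reaches an ML pipeline.

def clean_records(records, required=("age", "income")):
    """Keep only records with all required fields present and plausible."""
    cleaned = []
    for rec in records:
        if any(rec.get(field) is None for field in required):
            continue                      # missing value: reject
        if not (0 <= rec["age"] <= 120):
            continue                      # implausible value: reject
        rec = {**rec, "name": rec.get("name", "").strip().lower()}
        cleaned.append(rec)
    return cleaned

raw = [
    {"name": " Ada ", "age": 36, "income": 72_000},
    {"name": "Bob",   "age": None, "income": 55_000},   # missing age
    {"name": "Cy",    "age": 999, "income": 48_000},    # implausible age
]
print(clean_records(raw))  # only the first record survives
```

Real pipelines would use schema-validation tooling rather than hand-rolled checks, but the principle is the same: bad inputs are rejected before they can degrade a model.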

Almost half of companies (47 percent) acquire additional processing capacity from a third-party or cloud provider, the most commonly purchased hardware category. Only 11 percent of businesses buy hardware or systems for on-site installations, while 42 percent use Internet of Things (IoT) devices and networks to support their AI initiatives. In terms of AI-related software, 47 percent use analytics engines such as Apache Spark, 42 percent are experimenting with big-data clustering systems such as Hadoop, and 42 percent use sophisticated databases.

“For many newer users, analytics is not a skill that they have, which results in outsourcing as a viable alternative,” says David Tareen, SAS’s director of AI and analytics. Even well-established and well-understood analytic operations, such as “micro-targeting, finding fraudulent transactions,” may require outside assistance. According to Tareen, newer initiatives that need new data sources, along with creative and sophisticated analytics and AI approaches, may involve computer vision or conversational AI. “The project requires complete transparency on algorithmic decisions. These types of projects are more difficult to execute, but also offer new sources of revenue and unique differentiation.”

AI Bias

In recent months and years, there has been much discussion of AI bias, with some claiming that AI algorithms perpetuate racism and misogyny. Relying on AI also raises a question of trust, since corporate executives may delegate critical decision-making to unattended systems. How far have corporations progressed in their attempts to ensure fairness and remove bias from AI results? Not very far, according to the data: 41 percent of respondents said there are few, if any, checks on their AI output, or that they aren’t aware of such checks taking place. Only 17 percent said they run ongoing checks on AI output.
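What might an ongoing check on AI output look like in practice? One common starting point, sketched below, is measuring demographic parity: whether a model's positive-decision rate differs between groups. This is an illustrative example, not from the article; the group labels and the 0.1 review threshold mentioned in the comment are hypothetical choices.

```python
# Illustrative sketch (not from the article): a minimal periodic check on
# AI output, measuring the demographic parity gap between two groups.

def demographic_parity_difference(predictions, groups, group_a, group_b):
    """Absolute difference in positive-prediction rates between two groups."""
    def positive_rate(group):
        selected = [p for p, g in zip(predictions, groups) if g == group]
        return sum(selected) / len(selected)
    return abs(positive_rate(group_a) - positive_rate(group_b))

preds  = [1, 0, 1, 1, 0, 1, 0, 0]                       # binary model decisions
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]        # group membership

gap = demographic_parity_difference(preds, groups, "a", "b")
print(f"parity gap: {gap:.2f}")  # flag for human review if it exceeds, say, 0.1
```

A single metric like this cannot establish fairness on its own, as the experts quoted below note, but running it routinely is the kind of check most surveyed organizations currently lack.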

“Most of the required practices exist,” in many situations, especially in highly regulated industries, according to Nevala. “However, they haven’t been employed against analytic systems in a coordinated manner. Even more often, reviews are limited to point-in-time checks — profiling training data, validating model performance during testing or periodic reporting of business outcomes.” 

However, the difficulty is determining whether a particular bias is fair or not, according to Nevala. “This is where fairness comes in. Adding to the complexity, an unbiased system may not be fair, and a fair system may be biased — often by design. So, what is fair? Are you striving for equity or equality? Do your intended users have equal access or ability to use the solution? Will the human subject to the solution agree it is fair? Is this something we should be doing at all? These are questions technology cannot answer. Addressing fairness and bias in AI requires diverse stakeholder teams and collaborative teaming models. Such organizational models are emerging, but they are not yet the norm.”

The process must be transparent and accessible to all decision-makers, both inside and outside of IT. According to Kathleen Featheringham, director of artificial intelligence strategy at Booz Allen, developing responsible AI necessitates specialized tools and a supporting governance structure to properly weigh the benefit-to-risk trade-offs. “These are foundational elements required to put responsible AI principles and values into practice. Organizations must be able to make all data available in a descriptive form to be continually updated with changes and uses to enable others to explore potential bias involving the data gathering process. This is a critical step to help identify and categorize a model’s originally intended use. Until this is done in all organizations, we can’t eliminate bias.” 

There are several ways that IT executives and AI advocates can address challenges with AI actionability and accountability. Seventy percent are applying continuous integration/continuous deployment (CI/CD) techniques to their AI and ML work in order to provide regular checks on the composition of algorithms, the accompanying apps, and the data flowing through them. DevOps, which coordinates and automates the work of developers and operations teams, is present in 61 percent of businesses. AIOps, which stands for artificial intelligence for IT operations and is used to manage IT efforts, is employed by more than half of the organizations polled (52 percent). DataOps, which aims to control and automate the flow of data to analytic platforms, is used by 44 percent of businesses, and agile computing techniques by 43 percent.
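Applied to ML, a CI/CD check often takes the form of a gate that blocks deployment when a candidate model underperforms the one in production. The sketch below is an assumed illustration, not a technique described in the article; the metric names, hard-coded scores, and tolerance are hypothetical, and in a real pipeline they would come from an automated evaluation job.

```python
# Illustrative CI gate (an assumption, not from the article): a pipeline
# step that fails the build when a candidate model's holdout accuracy
# falls below the deployed model's, beyond a small noise tolerance.
import sys

DEPLOYED_ACCURACY  = 0.91   # hypothetical production baseline
CANDIDATE_ACCURACY = 0.93   # hypothetical new model's holdout score
TOLERANCE = 0.005           # allow small regressions from run-to-run noise

def gate(candidate, deployed, tolerance):
    """Return True if the candidate model is good enough to deploy."""
    return candidate >= deployed - tolerance

if __name__ == "__main__":
    if not gate(CANDIDATE_ACCURACY, DEPLOYED_ACCURACY, TOLERANCE):
        sys.exit("model regression: blocking deployment")
    print("model check passed")
```

Wiring a check like this into the same CI system that tests application code is what extends DevOps practice to models and data.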

MLOps Methodologies

Related to these techniques is MLOps, which Chris McClean, director and global lead for digital ethics at Avanade, advocates as a route to efficiently deploying and maintaining machine learning models in production. “MLOps methodologies not only avoids the common mistakes we have seen other companies make but sets up the organization for a future full of continuous successful AI deployment,” McClean says. He also argues for the widespread use of automation and automation technologies in order to “better measure and improve KPIs.”

According to industry experts, the following stages are necessary for a successful AI and ML journey that ensures confidence and viability in the delivery of outcomes:

  • Focus on the business problem: “Look for places where there is already a lot of untapped data,” says Sivasubramanian. “Avoid fixing something that isn’t actually broken, or picking a problem that’s flashy but has unclear business value.”
  • Focus on the data: Sethi suggests implementing a “modern data platform” to provide the fuel that allows AI and ML to function. “Some of the key areas we have seen clients begin their AI journey are in sales optimization, customer journey analytics, and system monitoring. To further drive scale and accessibility to data, establishing a foundational data platform is a key element, as it unlocks the structured and unstructured data that drive these underlying use cases.” Furthermore, according to Sivasubramanian, data management can consume the bulk of an AI team’s work. “When starting out, the three most important questions to ask are: What data is available today? What data can be made available? And a year from now, what data will we wish we had started collecting today?”
  • Work closely with the business: “IT delivers the infrastructure to model the data, while subject matter experts use the models to find the right solutions,” says Arthur Hu, senior vice president and CIO at Lenovo. “It’s analogous to a recipe: it’s not about any one ingredient, or even all of the ingredients; it takes the right balance of ingredients working together to produce the desired result. The key to ensuring that AI is used fairly and without bias is the same key to making it successful in the first place: humans steering the course. AI’s breakthroughs are only possible because experts in their fields drive them.”
  • Watch out for AI “drift”: Reviewing model findings and performance on a regular basis “is a best practice companies should implement on a routine basis,” says Sivasubramanian. “It is important to regularly review model performance because accuracy can deteriorate over time, a phenomenon known as model drift.” Aside from identifying model and concept drift, “companies should also review whether potential bias might have developed over time in a model that has already been trained,” Sivasubramanian says. “This can happen even though the initial data and model were not biased, but changes in the world cause bias to develop over time.” Demographic shifts within a sampled group, for example, may result in out-of-date findings.
  • Develop your team: Wide-scale training “is a critical aspect of achieving responsible AI, which combines AI adoption, AI ethics, and workforce development,” says Featheringham. “Ethical and responsible AI development and deployment depends on the people who contribute to its adoption, integration, and use. Humans are at the core of every AI system and should maintain ultimate control, which is why their proper training, at each leadership level, is crucial.” According to McClean, this includes well-targeted training and awareness. “IT leaders and staff should learn how to consider the ethical impacts of the technology they’re developing or operating. They should also be able to articulate how their technology supports the company’s values, whether those values are diversity and inclusivity, employee well-being, customer satisfaction, or environmental responsibility. Companies don’t need everyone to learn how to identify and address AI bias or how to write a policy on AI fairness. Instead, they need everyone to understand that these are company priorities, and each person has a role to play if they want to succeed.”
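The drift check recommended above can be sketched concretely. One widely used convention is the population stability index (PSI), which compares a feature's recent distribution against its training-time baseline; the PSI formula is standard, but its use here, the synthetic data, and the 0.2 alert threshold are assumptions for illustration, not details from the article.

```python
# Illustrative drift check (an assumption, not from the article): compare a
# feature's recent distribution to its training-time baseline using the
# population stability index. A PSI above roughly 0.2 is a common alert level.
import math

def population_stability_index(baseline, recent, bins=10):
    """Population Stability Index between two numeric samples."""
    lo = min(min(baseline), min(recent))
    hi = max(max(baseline), max(recent))

    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            idx = min(int((x - lo) / (hi - lo) * bins), bins - 1)
            counts[idx] += 1
        # A small floor avoids log(0) when a bin is empty
        return [max(c / len(sample), 1e-6) for c in counts]

    p, q = proportions(baseline), proportions(recent)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

baseline = [0.1 * i for i in range(100)]        # training-time feature values
recent   = [0.1 * i + 4.0 for i in range(100)]  # shifted production values

psi = population_stability_index(baseline, recent)
print("drift suspected" if psi > 0.2 else "stable")
```

A routine job running checks like this on model inputs and outputs is one concrete way to operationalize the regular performance reviews Sivasubramanian describes.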

Finally, “creating AI solutions that work for humans also requires understanding how humans work,” says Nevala. “How can humans engaging with AI systems influence their behavior and performance? And vice versa. Critical thinking, navigating uncertainty, and collaborating productively are also underrated yet key skills.”
