Generative AI is just words that statistically belong together
The more I contemplate generative AI and its role in the automation field, the more evident it becomes to me that it all boils down to the data.
There are two main scenarios. The first involves using AI to predict how data should be transformed based on previous data transformations. The second involves integrating existing data sources to train the underlying models.
"Generative AI is simply a collection of words that have a statistical association with each other." Reflect on this statement.
Based on the source data used to train the model, one can make statistical predictions. However, a significant amount of the available data is bad or incomplete. This is one of the reasons why automating the flow of data is necessary: it helps eliminate the human errors that occur during manual data entry. Still, automation alone cannot fix bad data; other approaches are required for data cleansing. This raises questions about how this will work in an enterprise setting. Automation, master data management, and data cleansing will become more important than ever before.
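To illustrate why automation alone does not fix bad data, here is a minimal, hypothetical sketch: an automated flow would happily forward both records below untouched, while a separate validation step flags the broken one. The field names, records, and rules are invented for the example; a real master data management setup would be far richer.

```python
# Minimal sketch: automation moves records, cleansing rules catch bad ones.
# Field names and validation rules are hypothetical.
import re

records = [
    {"customer_id": "C001", "email": "anna@example.com", "country": "SE"},
    {"customer_id": "",     "email": "not-an-email",     "country": "Sweden"},
]

def validate(record: dict) -> list[str]:
    """Return a list of data-quality issues found in a single record."""
    issues = []
    if not record["customer_id"]:
        issues.append("missing customer_id")
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", record["email"]):
        issues.append("invalid email")
    if len(record["country"]) != 2:
        issues.append("country is not an ISO 3166-1 alpha-2 code")
    return issues

for record in records:
    problems = validate(record)
    # An automated flow would forward both; cleansing flags the second.
    print(record["customer_id"] or "<blank>", "->", problems or "clean")
```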
I suppose using AI as a service, with models trained on multiple datasets from various companies, could be effective. However, would a large enterprise be willing to intentionally train a shared model that could benefit its competitors, especially if it has the resources to train its own model? And what about the undisclosed training that likely occurs when using cloud services without thoroughly reading the terms and conditions?
To address these concerns, it is crucial for enterprises to carefully evaluate the risks and benefits of using not only AI services provided by others but, I would argue, any service that processes their data. They should thoroughly review the terms and conditions to understand the extent to which their data will be used for training purposes. Enterprises should also consider investing in their own AI infrastructure and model training to maintain control over their proprietary information and ensure the confidentiality of their data. Finding the right balance between leveraging shared models and protecting sensitive information will be a key challenge in the era of generative AI.
Perhaps generative AI isn’t the endgame. I read the following in an article that references Yann LeCun, chief AI scientist at Meta.
"He argues that generative AI lacks the ability to truly understand the world and is limited to regurgitating existing information rather than conceptualizing abstract ideas."
Source: https://www.globalvillagespace.com/generative-ai-has-reached-a-dead-end-yann-lecun/
Of course, he went on. Quoting the same source: “He unveiled Meta’s latest AI project called the image-based Joint Embedding Predictive Architecture (JEPA). This ambitious endeavor aims to surpass the capabilities of generative AI, enabling machines to conceptualize abstract ideas instead of regurgitating existing online information.”
My closing thought is that AI is here to stay. It will change our lives, and it is truly exciting to think that the future may be even more magical than generative AI is today.
In the meantime, IBM watsonx solves the privacy issue.
#ibmchampion