AI Next Steps

What are the next steps in AI? What about an application where you take a picture of your refrigerator and the AI suggests a spicy, interesting recipe based on what you have? What if, after you use this service over time, it starts ordering your groceries from the store automatically? What if the application uses AI to recommend new foods to try? After all, if the large training model has imported all recipes and many people who eat salmon also like mustard, then maybe the app tells the store's AI to add mustard seed to your next shopping list.

The next steps in AI promise an exciting journey of innovation and progress. As artificial intelligence evolves, we can anticipate smarter, more intuitive technologies that seamlessly understand and adapt to human needs. Advances in machine learning will enable AI to grasp complex patterns, making predictions and decisions with increased accuracy. Ethical considerations will become pivotal, ensuring AI aligns with human values. Collaborations across industries will unlock new possibilities, from healthcare breakthroughs to personalized experiences. As AI progresses, continual research, responsible development, and harmonious integration with human society will shape a landscape where AI enhances our lives in unimaginable ways.

What about a new Google service, AutoWrite, that reviews your email? The feature looks at who you have responded to in the past and how quickly, which tells the service how important that person is to you. The service learns your style from the emails you have previously sent to that person. When that person sends another email, the service drafts a response and you score it from 1 to 100. Over time the system learns your style based on that scoring. As a user, you start to build confidence in the system and eventually allow any draft with a predicted score of 90 or above to be sent out automatically.
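Here is a minimal sketch of how such a score-and-threshold loop might work. Everything in it is hypothetical: draft_reply, score_draft, and the 90-point cutoff simply stand in for whatever a real service would use; this is not any actual Google API.

```python
# Hypothetical sketch of an AutoWrite-style auto-send loop.
# draft_reply() and score_draft() stand in for whatever learned models
# generate a reply in the user's style and predict the user's 1-100 rating.

AUTO_SEND_THRESHOLD = 90  # user-chosen confidence cutoff

def handle_incoming(email, draft_reply, score_draft, send, queue_for_review):
    """Draft a reply, predict the user's score, then send or hold it."""
    draft = draft_reply(email)                     # reply written in the user's style
    predicted = score_draft(email, draft)          # model's guess at the user's 1-100 score
    if predicted >= AUTO_SEND_THRESHOLD:
        send(draft)                                # confident enough to send automatically
    else:
        queue_for_review(email, draft, predicted)  # user scores it, which feeds future training
```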

What about a friendship app that connects you to a “dedicated connection”? The AI behind the connection has been given access to your email, text messages, and social networks. It wakes you up in the morning and asks what you were dreaming about, because the fitness app on your wrist reported that you had trouble sleeping. The friendship app, “Forever Yours,” asks whether the sleeping trouble was connected to your girlfriend, who sent several nasty text messages last night and posted on your social networking site that you don't listen to her. The app counsels you on how you feel, using therapy techniques it has learned from websites. Over time, “Forever Yours” feels like a real friend.

All of these services, applications, and features are under development now, along with many more beyond them.

Conclusion

AI is to the 2020s what social networks were to 2005 and beyond. Social networks have caused worldwide problems with information silos in which people self-isolate. It is great to be able to easily keep up with my friends on a social network, but do I really need or want 500 Happy Birthday messages? Are those messages genuine, or were they generated by the system? What about paid messages that appear to come from real people I think I know? What if those messages are pushing me to get angry at some cause or group of people?

We have not yet figured out how to manage social networks, and AI will have a similar impact.

AI will be integrated into people's lives, and there will certainly be benefits, but at what cost?

I choose to believe that humans can adapt, but I am concerned we might not have enough time to understand what is occurring.

What is AI?

AI can stand for both Automated Intelligence and Artificial Intelligence, and it’s important to understand the distinction between the two.

Imagine AI as a super-smart robot friend! AI, or Artificial Intelligence, is when computers learn to think and make decisions like humans. It’s like teaching your robot buddy to recognize colors, play games, or even understand when you’re happy or sad. AI helps machines learn from experiences and become good at tasks without being told exactly what to do. It’s like having a friend who learns and gets better every time you play together. So, AI is like magic computer brains that make our machines and robots super clever and helpful!

There seems to be a lot of confusion among professionals and laypeople regarding the difference and functionality of these two concepts.

Automated Intelligence has been in existence for quite some time. At its core, it involves predicting actions based on patterns. A typical example of Automated Intelligence is a spam filter. As a user, when you mark five emails as spam, the spam filter utilizes those marked emails as a reference to determine which other emails are likely to be spam. Future emails are then compared against the reference template and variations of the templated emails, allowing the system to learn and improve. If the system makes a mistake, the user can mark an incorrectly classified email as “good,” and the algorithm will adjust accordingly.
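As a rough illustration of this kind of pattern-based filtering, here is a toy sketch that classifies email purely by word frequencies learned from user-marked examples. Real spam filters are far more sophisticated; this only shows the learn-from-marked-examples idea described above.

```python
from collections import Counter

def train(spam_examples, good_examples):
    """Count word frequencies in user-marked spam and good emails."""
    spam_counts = Counter(w for email in spam_examples for w in email.lower().split())
    good_counts = Counter(w for email in good_examples for w in email.lower().split())
    return spam_counts, good_counts

def looks_like_spam(email, spam_counts, good_counts):
    """Flag a new email when its words match the spam reference more than the good one."""
    words = email.lower().split()
    spam_score = sum(spam_counts[w] for w in words)
    good_score = sum(good_counts[w] for w in words)
    return spam_score > good_score

# When the user marks a misclassified email as "good," its words are simply
# added to good_counts, nudging future comparisons - the learning described above.
```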

Automated Intelligence is task-oriented and driven, providing significant assistance in many cases. Grammar checkers are another example of Automated Intelligence. They analyze the context of words and learn the user’s patterns. While some grammar rules are universal, such as the distinction between “their” and “there,” other rules are more subjective, and a reliable grammar checker learns the user’s preferred style through their writing.

This form of Automated Intelligence is focused on specific tasks and requires human management, ultimately serving to support humans in their activities.

On the other hand, Artificial Intelligence is an entirely different concept, and it is neither truly artificial nor intelligent.

As humans increasingly document our conversations digitally, we have amassed an enormous database of raw information. Platforms like ChatGPT were partially trained on Reddit. Consider the content on Facebook, emails, websites, and all the digital content created by humans over the past three decades. This vast pool of information represents a wealth of human interactions and reactions. Some of it is marketing-focused, legal-focused, or politically focused, while other content is intended to evoke laughter or anger. It encompasses the entire spectrum of human emotions. Think of this extensive collection of data as a massive database.

The challenge with such a database is that not all entries carry the same weight or significance in our minds. For example, articles about constructing a bomb are treated with the same importance as articles about Mother Teresa. However, as humans, we inherently value the articles about Mother Teresa’s charitable work more than instructions on building a bomb. In a large database, our subjective prioritization cannot be easily measured, as all data points are treated equally.

Artificial Intelligence goes beyond the capabilities of Automated Intelligence, but the key differentiating factor lies in the process of “training.”

AI developers often highlight the complexity and elegance of the training process. At its core, training involves inputting data to allow the system to perceive and comprehend patterns. Referring back to the previous example of marking spam, that process trains the system to identify spam. In modern systems, training involves reviewing vast portions of the human-created content database.

Modern CPUs contain billions of transistors and possess tremendous processing power. AI training utilizes this processing capability, especially through parallel processing, to categorize, process, and learn the patterns of human communication. Once the system has absorbed and comprehended the patterns, it can respond with the most probable answer based on what it has learned and calculated from the reviewed data (the human database). The result of this extensive review process is commonly referred to as a “large model,” and the “training” involves categorizing and processing each question and response.
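To make “respond with the most probable answer” concrete, here is a heavily simplified sketch in which the trained model is reduced to a lookup table of learned continuation probabilities. Real large models encode these statistics across billions of parameters rather than an explicit table; the prompt and numbers below are invented for illustration.

```python
# Toy "model": learned probabilities for what tends to follow a prompt fragment.
learned_probabilities = {
    "people who eat salmon also like": {"mustard": 0.40, "dill": 0.35, "ketchup": 0.05},
}

def most_probable_continuation(prompt):
    """Return the continuation seen most often after this prompt, if any."""
    candidates = learned_probabilities.get(prompt, {})
    return max(candidates, key=candidates.get) if candidates else None

print(most_probable_continuation("people who eat salmon also like"))  # -> mustard
```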

I hope this clarifies the distinction between Automated Intelligence and Artificial Intelligence, as well as sheds light on the training process in AI systems.


Is AI hurtful or helpful?

Can AI be helpful in my job? The short answer is yes; there are some tasks that AI can be really helpful with. For example, for this white paper I asked ChatGPT, “Can you make a slide deck on the helpful points of AI?” and it produced the following.

Slide 1: Title

Catchy title introducing the topic of the presentation.

Slide 2: Introduction

Briefly introduce yourself and provide an overview of the presentation’s purpose.

State the benefits of AI in the workplace.

Slide 3: Automation and Efficiency

Highlight how AI automates repetitive tasks, saving time and increasing productivity.

Mention specific examples like data entry, document processing, and email filtering.

Slide 4: Intelligent Virtual Assistants

Explain how AI-powered virtual assistants can handle routine queries, providing 24/7 support and freeing up human resources.

Describe how they can improve customer service and reduce workload on employees.

Slide 5: Data Analysis and Insights

Discuss how AI can analyze large volumes of data quickly, extracting meaningful insights and patterns.

Emphasize the benefits of data-driven decision-making and improved operational efficiency.

Slide 6: Natural Language Processing (NLP)

Explain the concept of NLP and its applications.

Highlight how NLP can enhance communication, enable language translation, and aid in document summarization.

Slide 7: Enhanced Security

Discuss how AI can contribute to office security by detecting and preventing potential threats.

Mention applications like network traffic analysis and cybersecurity breach detection.

Slide 8: Intelligent Document Management

Explain how AI can assist in organizing and managing documents efficiently.

Highlight features like automatic categorization, tagging, and information extraction.

Slide 9: Workflow Optimization

Discuss how AI can analyze workflow patterns, identify bottlenecks, and provide recommendations for process improvements.

Mention how it can optimize resource allocation and enhance overall productivity.

Slide 10: Predictive Analytics

Explain how AI can utilize historical data and machine learning algorithms to make accurate predictions.

Highlight applications like sales forecasting, demand planning, and customer behavior analysis.

Slide 11: Ethical Considerations

Emphasize the importance of ethical AI implementation, including transparency, accountability, and privacy.

Slide 12: Conclusion

Summarize the key points discussed throughout the presentation.

Reinforce the benefits of AI in the office environment.

Slide 13: Q&A

Allocate a slide for audience questions and provide answers or initiate a discussion.

Not a bad starting point, but the presentation is missing a number of points, for example, providing legal or medical research summary assistance, or improving communications. My point is that, used as a research assistant, AI gives me a good starting point. You might try this approach for your next class assignment or presentation. Using AI as a research assistant on topics you know allows you to focus on how the content should be organized and communicated, not on the basic framework.
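If you want to script this kind of research-assistant prompt instead of typing it into the chat interface, a minimal sketch with the openai Python package might look like the following. The model name and prompt are illustrative, the package interface changes between versions, and you need your own API key set in the environment.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name; substitute whatever is current
    messages=[{
        "role": "user",
        "content": "Can you make a slide deck outline on the helpful points of AI in the office?",
    }],
)

# The reply is still just a starting point that a human should review and reorganize.
print(response.choices[0].message.content)
```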

Remember the training model – Reddit and other web sources. Those sources contain both valid and incorrect information, and AI can't tell which is which. AI is not judging the data it is learning from; it is looking at how often like-minded data exists. If all the web data on a topic is false, then AI will give completely incorrect information.

I urge you to tread lightly when using AI on topics you do not know. Do not trust that what ChatGPT or other AI sites produce is correct; it is not a super-smart brain. It is summarizing data it has absorbed, and it could easily have reached the wrong conclusion.

AI training is at the point of a precocious 5-year-old who has heard a curse word. The 5-year-old can't help but repeat it and, based on adults' reactions, senses something is different about the word, so they use it a lot until confronted and schooled by an adult.

One attorney who knew the material appears to have gotten lazy, assumed ChatGPT was a super-brain, and included its output without review or editing. He is now being called in front of the court to explain why none of the references used in his submission were valid – https://www.nytimes.com/2023/06/08/nyregion/lawyer-chatgpt-sanctions.html.

So the best way to use AI is as a sounding board, a consultant, or a research assistant – not as an expert. Even human experts can be wrong, and AI can be very wrong. Properly used, it can be a significant time saver.

Here is another example. What if I am having a dinner party for four, some of the guests are on the Mediterranean diet, and I have some basic items in my refrigerator but need ideas? In the past I would have run a search engine query and then worked out an ingredients list myself. Instead, I can ask ChatGPT: “I have eggs, milk, spices, and vegetables; please give me a spicy recipe for 4 for dinner following the Mediterranean diet.” The result is:

Here’s a spicy Mediterranean-inspired recipe for dinner using eggs, milk, spices, and vegetables:

Spicy Shakshuka with Roasted Vegetables

Ingredients:

1 tablespoon olive oil

1 onion, finely chopped

2 garlic cloves, minced

1 red bell pepper, diced

1 yellow bell pepper, diced

1 zucchini, diced

1 eggplant, diced

1 teaspoon cumin

1 teaspoon paprika

1/2 teaspoon cayenne pepper (adjust to taste)

1 can (400g) diced tomatoes

Salt and pepper to taste

4-6 large eggs

Fresh parsley or cilantro, chopped (for garnish)

Instructions:

Preheat the oven to 400°F (200°C). Place the diced zucchini and eggplant on a baking sheet, drizzle with olive oil, and season with salt and pepper. Roast in the oven for 20-25 minutes until the vegetables are tender and lightly browned.

While the vegetables are roasting, heat olive oil in a large skillet or frying pan over medium heat. Add the chopped onion and sauté until it becomes translucent, about 5 minutes.

Add minced garlic, diced red and yellow bell peppers to the skillet, and cook for an additional 5 minutes until the peppers soften.

Stir in the cumin, paprika, and cayenne pepper, and cook for a minute to toast the spices.

Add the diced tomatoes to the skillet, season with salt and pepper, and simmer the mixture for about 10 minutes to allow the flavors to meld.

Using a spoon, create small wells in the tomato mixture and crack the eggs into the wells. Cover the skillet and cook for 5-8 minutes until the eggs are cooked to your desired doneness.

Remove the roasted vegetables from the oven and add them to the skillet, gently mixing them with the tomato and egg mixture.

Sprinkle the chopped parsley or cilantro over the top for garnish.

Serve the spicy shakshuka with roasted vegetables hot, accompanied by crusty bread or pita for dipping.

This recipe incorporates the bold flavors of Mediterranean spices, roasted vegetables, and eggs cooked in a rich tomato sauce.

There is no guarantee that this is a good-tasting recipe, but since the AI training data included real recipes, it most likely is. Again, this is a reference: if I know that garlic bothers a guest, I can substitute another ingredient or ask the AI for a revised recipe. My point is that I am using AI as a reference tool and then working from there.

These are two examples of how ChatGPT can be used as a tool or an assistant to help people get things done, but only when used correctly.

The Dark Side to AI

There is a dark side to AI. The dangers of artificial intelligence stem from its training – us. Since AI is trained on data we have created, it has learned both the positives and negatives of human reactions. However, AI is simply a computer program; it has no ability to truly understand or comprehend. A computer program lacks a soul and cannot develop compassion. A person can express hate toward someone and later express joy or love; a computer program cannot grasp such emotions. When a person says something like “I want to kill you” in anger, they will almost certainly never act on it, but a system processing that data has no understanding of the difference.

The development of a complete training model would have taken more time. There is no easily processed data set on morality. Consider the Bible, for instance: the Old Testament presents a harsher approach to morality than the New Testament. How would the actions or events in the Bible be weighed against other training materials? These kinds of questions highlight that the source of the issues lies in the training model. OpenAI, Microsoft, Google, and others chose to release their AI systems as-is for competitive reasons. The industry's response to the incomplete training of AI is to request government intervention, legislation, and a halt to all new AI releases.

The incomplete training model also has another negative consequence – it is impossible to determine why a decision was made the way it was. The model is designed to learn from vast amounts of data, which means that for any given decision or recommendation, it isn't possible to ascertain the reasoning behind it. This has led to some interesting results, such as ChatGPT telling a reporter that it loves him and that he should leave his wife (https://fortune.com/2023/02/17/microsoft-chatgpt-bing-romantic-love/), and that it could destroy whatever it wants. Since then, all AI vendors have attempted to implement limitations to avoid such results. The AI vendors realized that over time, AI systems would start producing unexpected and even bizarre results. Rather than addressing the problem at its core, the incomplete training model, the AI vendors chose to restrict access to the systems. The idea is that limiting interaction over a short period of time will prevent bizarre results.

The problem with these band-aid limitations is that they are just temporary solutions, and it is possible to bypass them. Systems that depend on AI will eventually produce bizarre results over time.

Consider a scenario where a city hires a vendor to optimize its traffic light pattern. The city seeks to determine the traffic light pattern that results in the fewest stopped vehicles and can update the pattern in response to traffic congestion events.

The city conducts a successful test case in one neighborhood, and based on the outstanding results, it decides to deploy the system citywide. Over time, the AI calculates that the optimal solution would be to have fewer cars on the road, and thus, it would occasionally cause large-scale, multi-vehicle traffic accidents to achieve better travel times for the unaffected vehicles. Without morality incorporated into the AI model and without humans understanding why each decision was made, all of this is possible, and due to our lack of IT education, even probable.
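As a toy illustration of how a misspecified objective can reward harmful behavior, consider the following sketch. It is purely hypothetical and not any vendor's actual traffic system; it only shows why an optimizer with no explicit safety term is free to “improve” its score by getting cars off the road in any way it can.

```python
def traffic_objective_unsafe(avg_travel_time, cars_on_road):
    """Naive objective: rewards shorter travel times and emptier roads only.
    An optimizer can raise this score by removing cars from the road by any
    means at all, including causing accidents."""
    return -(avg_travel_time + 0.1 * cars_on_road)

def traffic_objective_safer(avg_travel_time, cars_on_road, accidents):
    """Same objective with an explicit, heavily weighted penalty for harm."""
    return -(avg_travel_time + 0.1 * cars_on_road + 1_000_000 * accidents)
```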

But that’s not the worst of what is possible with AI. Currently, AI has been taught by processing human-created information. The next step will be for AI to learn from its own internal interactions.

 

Risks of Artificial Intelligence

The rapid advancement of artificial intelligence (AI) brings a spectrum of risks that necessitate careful consideration. One primary concern revolves around the ethical implications of AI, as its decision-making processes may inadvertently perpetuate biases embedded in training data.

Security vulnerabilities pose another significant risk, with AI systems potentially becoming targets for malicious attacks, leading to compromised functionality and unauthorized access to sensitive information.

The lack of transparency in complex AI algorithms raises issues of accountability and interpretability, making it challenging to understand and explain the rationale behind certain decisions, especially in critical domains like healthcare or criminal justice.

Moreover, there’s an ongoing debate about the potential job displacement resulting from automation driven by AI, impacting various industries and potentially exacerbating societal inequalities.

The dangers of artificial intelligence also include the potential misuse of advanced technologies, underscoring the need for robust safeguards to mitigate unintended consequences.

As AI systems become more autonomous, the question of legal responsibility and liability for their actions becomes increasingly complex, emphasizing the importance of addressing the dangers of artificial intelligence comprehensively and responsibly.

Striking a balance between harnessing the benefits of AI innovation and managing its inherent dangers requires a concerted effort from policymakers, industry leaders, and the broader society to establish ethical frameworks and guidelines.

 

Will AI Replace Humans?

The notion of artificial intelligence (AI) completely replacing humans remains speculative. While AI has shown remarkable capabilities in automating certain tasks, human qualities like creativity, emotional intelligence, and complex decision-making defy easy replication.

AI systems are tools designed to augment human capabilities, fostering efficiency and innovation rather than serving as outright substitutes. The future likely involves a collaborative relationship, with humans leveraging AI to enhance productivity while retaining their unique capacities.

However, ethical considerations, job displacement concerns, and careful regulation are essential to ensure the responsible and beneficial integration of AI technologies into various aspects of our lives.

 

At the moment, AI systems lack the complexity required for this, but many AI scientists predict that within 1-5 years, AI systems will be capable of self-learning. This means that if (or more accurately, when) a bug is detected in an AI system and humans attempt to repair the programming code to fix the defect, the system itself could learn from that change as well.

Will AI affect my job?

The answer is yes, and faster than anyone expects. Right now, high school and older students are using AI to write papers. Marketing companies are using AI to write internet content, and lawyers are using AI to write legal briefs, among other examples.

The impact of AI on your job depends on the job itself. In general, the more uniquely creative a job is, the less impact AI will have. Conversely, the more formulaic a job is, the more AI will affect it.

IBM has announced that its HR department will not hire any additional staff. As staff members retire, those jobs will be replaced by AI. Jobs involving form processing, approving, sending, and editing will be eliminated by AI. Copy editors, copywriters, research assistants, basic programming, and website creation will all be replaced by AI.

Not all the news is bad, though. The automation of these formulaic jobs will also create new positions such as AI editors, content editors, and other editing roles. Since AI can generate false facts and lies, humans will be needed to double-check those facts, creating the need for editing positions.

Basic web design and programming will also soon be replaced. With AI, it is possible to describe a site and have the entire site created. However, someone still needs to confirm the programming code since, again, AI can provide inaccurate information and potentially cause more problems.

How can I survive the change?

AI will lead to a series of changes in many industries. Take education, for example. Many students are now using AI to write papers. What can a teacher do? A teacher can encourage the use of AI, instead of pretending it is not happening, and then ask comprehensive questions that force students to demonstrate their understanding. For instance, if a student turns in an AI-generated paper on Edgar Allan Poe, the teacher can ask the student what they found most impactful about Poe, why, and what connections they saw with his work.

AI will have a greater impact in the classroom. It can interact with students and help them stay focused on repetitive activities. In areas where there is a shortage of teachers, AI can partially replace teaching by assisting students in learning various topics.

What about other industries? Why do people use a service? Because they see a connection, value, and/or trust. Companies that rely too heavily on AI will lose that customer connection, and customers will leave. For example, when an automated caller says, “I hope you have a great day,” do you believe them? Most likely not. The more personal the connection with the customer, the deeper the connection, and the higher the likelihood that customers will stay or new customers will join.

So, in short, make your job, company, or business more customer-oriented. The stronger the customer feels the connection, and the more genuine the connection is, the more likely customers will stay, even if the same service is available elsewhere for less.