
The role of back propagation in a machine learning algorithm

By: Ivy Dahl

Photo by Google DeepMind on Pexels.com

An algorithm is a series of steps to complete a task.

A machine learning algorithm is a computer program composed of many individual yet interconnected parts that perform a series of steps, taking in numbers and transforming them in a guided way to achieve a specific goal without being given explicit directions. Back propagation is one of those parts. It works backward from the error of a specific guess, calculating how that error changes as the numbers inside the program change.

Imagine a set of data points on a graph, handed to the back propagation step by an earlier part of the machine learning algorithm. The points are spread out, so a curve has to be drawn through them to connect them. Approximating a curve that fits all those points evenly is a difficult task. An objective, numerical way to measure the fit is the squared distance between a candidate curve and the data points. This number is called the loss. A high loss means the data points are far from the currently generated curve, indicating a bad approximation; a low loss means the curve closely aligns with the data, making it a more accurate approximation.
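A minimal sketch of this loss in Python; the data points and the two candidate lines below are made up purely for illustration:

```python
def squared_loss(predict, xs, ys):
    """Sum of squared distances between a candidate curve and the data points."""
    return sum((predict(x) - y) ** 2 for x, y in zip(xs, ys))

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]           # these points lie exactly on y = 2x + 1

good_fit = lambda x: 2 * x + 1      # passes through every point
bad_fit  = lambda x: 0.5 * x        # far from the points

print(squared_loss(good_fit, xs, ys))  # 0.0: low loss, good approximation
print(squared_loss(bad_fit, xs, ys))   # 53.5: high loss, bad approximation
```

The single number this function returns is what the rest of the algorithm tries to drive down.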

A function could represent a line.

But a constant function wouldn’t fit this line by itself. Neither would a single power of x. But what if you put them together? And added another? You can think of this collection of functions as a toolbox: the pieces are already there, but each one still needs the right coefficient to bend the combined curve as close to the line as possible. The goal is to find the combination of coefficients that outputs the best-fitting line, defined as the one whose combination of k’s yields the lowest loss. Each k starts out as a random number.

Y(x) = k₀ + k₁x + k₂x² + k₃x³ + k₄x⁴

Plugging a set of k’s into this equation constructs a curve; the loss function then compares that curve to the data and outputs one number, the loss. Adjusted k’s are plugged back into the general equation for the curve, and the search for the minimum loss continues. Done naively, this means finding the distance between the data points and the currently generated curve over and over again.
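That loop can be sketched in Python. The polynomial follows the form of the equation above, while the coefficients and data points here are invented for illustration:

```python
def curve(ks, x):
    """Evaluate Y(x) = k0 + k1*x + k2*x^2 + ... at the point x."""
    return sum(k * x ** i for i, k in enumerate(ks))

def loss(ks, xs, ys):
    """One number summarizing how far the curve is from the data."""
    return sum((curve(ks, x) - y) ** 2 for x, y in zip(xs, ys))

xs = [-1.0, 0.0, 1.0, 2.0]
ys = [ 2.0, 1.0, 2.0, 5.0]          # these points lie exactly on y = 1 + x^2

print(loss([1.0, 0.0, 1.0, 0.0, 0.0], xs, ys))  # 0.0: a perfect set of k's
print(loss([0.0, 1.0, 0.0, 0.0, 0.0], xs, ys))  # 20.0: a worse guess
```

Trying combinations blindly like this is exactly the trial-and-error search the next paragraph asks how to avoid.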

But can we find the best combination without countless rounds of trial and error? This is where differentiability helps: it allows us to home in quickly on optimal number combinations. Focus on one coefficient k at a time, plotting it against the loss on a coordinate graph, and ask which value of that k results in the lowest loss individually.

Knowing only the local behavior of the function, we are blind to all other points. So adjust the input by a small amount. This adjusted input produces a new output, and comparing the change in output with the change in input gives a slope. Draw a line through the two points: as you take smaller and smaller steps, this line aligns more and more closely with the tangent line at the original point. The steepness of that tangent line is the rate of change, indicating which way the curve it’s touching is heading.
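The slope-from-smaller-steps idea can be shown numerically; the function and step sizes here are illustrative:

```python
def slope(f, x, h):
    """Nudge the input by h and compare the change in output to the change in input."""
    return (f(x + h) - f(x)) / h

f = lambda x: x ** 2      # the true tangent slope at x = 3 is 6

for h in [1.0, 0.1, 0.001]:
    # as the step h shrinks, the ratio approaches the tangent slope 6.0
    print(h, slope(f, 3.0, h))
```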

Now let’s say you do this for every one of the coefficients individually. This gives a set of simple slopes, one per coefficient, that together describe how the complicated loss behaves. You might not be able to grasp a complicated function like this all at once, but if you break it into these individual slopes, it’s easy to nudge each coefficient in the direction its slope suggests, and, once done, put them all together to recreate the curve with the lowest loss.
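Putting the pieces together, here is a minimal gradient-descent sketch. The data, step size, and iteration count are all illustrative, and the per-coefficient slopes are estimated numerically rather than by true back propagation, which computes them analytically:

```python
def curve(ks, x):
    return sum(k * x ** i for i, k in enumerate(ks))

def loss(ks, xs, ys):
    return sum((curve(ks, x) - y) ** 2 for x, y in zip(xs, ys))

def grad(ks, xs, ys, h=1e-6):
    """Estimate the slope of the loss with respect to each coefficient individually."""
    g = []
    for i in range(len(ks)):
        bumped = ks[:]
        bumped[i] += h
        g.append((loss(bumped, xs, ys) - loss(ks, xs, ys)) / h)
    return g

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]            # these points lie exactly on y = 1 + 2x

ks = [0.0, 0.0]                      # start from a bad guess
for _ in range(2000):
    g = grad(ks, xs, ys)
    ks = [k - 0.01 * gi for k, gi in zip(ks, g)]   # step each k downhill

print([round(k, 2) for k in ks])     # close to [1.0, 2.0]
```

Each iteration nudges every coefficient against its slope, so the loss keeps shrinking until the curve settles onto the data.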

This is a brief description of how back propagation is an integral part of a machine learning algorithm: it takes a specific set of inputs from somewhere else in the algorithm and helps create the best-fitting curve for them, so that curve can be passed forward to another part of the machine learning algorithm.

The real reason some students use AI

By: Nathaniel de Sam Lazaro

In the educational world today, much is made about artificial intelligence (AI) and how it could be used by students to cheat in their classes. 

By using AI, students are able to create work with far less effort than if they did it themselves, creating a well warranted fear amongst teachers that their students may be using AI to cheat by passing off AI generated work as their own. Not only is this academically dishonest, it actively hinders the student’s education. This fear among educators has been exacerbated by the increasing difficulty of differentiating work made by AI from that made by students.

Personally, I have never used AI to cheat, though I will admit to having intrusive thoughts of doing so from time to time. This thought comes up whenever I am struggling with an assignment, telling me that I could perhaps earn the same grade more easily, without doing any work. 

The reason I’ve never given in is not out of a fear of being caught or some compulsion to blindly follow rules, but because I know that by doing so, I would be hindering my own education. I wouldn’t be gaining the knowledge or skills that I would have if I were to do the assignment myself. This mentality is what has prevented me from giving in to this urge. But this made me realize that students who do use AI are doing so because they don’t value their own learning, only valuing the grade they earn in the class.

I was curious about what other people’s mentalities toward this were, so I asked some of my friends the following question: “If you could have AI do some or all of your assignments, knowing that there was no chance you would ever be caught, would you do so, and why?” Their responses intrigued me. 

Some people told me that no, they wouldn’t use AI in any cases, because they wouldn’t learn without doing so. This is similar to my mentality, and everyone seemed to agree with this in at least some of their classes. Some friends of mine stated that they would do so for certain classes, since they thought that those classes’ material didn’t matter for their planned careers. In other words, they saw no value in these classes other than to pass them, moving them closer to graduating. 

How does this happen, that these students don’t see any value in their learning except to pass a test, get good grades, get into college, and start their careers? They aren’t born this way. Young children have an endless number of questions on their minds. They are naturally curious, ask deep questions, and annoy their parents by constantly asking “Why?”

By contrast, high school students seem to see school only as a transaction to get where they want in life, absorbing the information taught just long enough to pass a test and get an “A”. What happens between the start of elementary school and the start of high school that takes this natural curiosity out of our students? What drives these students to use AI to pass their classes, missing out on the learning and skills they would gain by doing the work themselves, and missing out on the answers to the deep questions they asked as children?

I think the answer is that our education system, and society as a whole, emphasizes grades above all else. If I could use AI and get the same grade, why not do so? Yes, I would be missing out on learning, but that doesn’t matter if all that I care about is my grade. In the above hypothetical, there are no consequences for using AI. One could get an A without doing any work on their own. In order to choose not to do so, a student would need to value learning, knowledge, and curiosity in and of themselves in order to get the most out of their education. Unfortunately, what is usually valued most by parents, educators, colleges, and society as a whole, is simply a letter on a piece of paper or computer screen. And in many cases, the allure of that letter grows to outweigh or even destroy any value one may have placed on what it is supposed to represent. 

AI hallucinations: Why and how AI makes mistakes

By: Karl Salkowski

Photo by Miguel Á. Padriñán on Pexels.com

It is very well known that AI isn’t perfect; even simple questions can confuse AI, leading it to spread misinformation. These mistakes have become so common that they have earned their own moniker, “AI Hallucinations”.

Some famous “hallucinations” include recommending glue to stick cheese to pizza, misspelling common words, and touting the health benefits of eating rocks.

AI formulates answers based on data from all over the internet. Filtering correct information from misinformation at that scale is impractical, so AI ends up trained on both. When AI answers your question, it attempts to replicate the data it was trained on: it makes observations about that data and forms its answers based on the patterns it finds.

Along with the inability to tell correct and incorrect information apart, AI also has biases regarding race and gender. According to Bloomberg.com when AI was asked to generate pictures of people in different professions, the photos it generated showed harmful biases. For example, AI generated more photos of people with lighter skin tones in higher paying jobs, and more people with darker skin tones in lower paying jobs.

AI also generated more perceived men than perceived women in many high-paying jobs, including engineer, architect, CEO, doctor, lawyer, and politician. Out of the 100 images AI generated for the prompt “Engineer”, only two were not of perceived men.

There are large disparities between the images AI generates and the people actually working in those careers, and these images construct a distorted viewpoint of the world. According to Bloomberg.com, by 2025, 30% of marketing materials produced by large corporations will be created using AI like this.

These mistakes and biases will reinforce a lot of these stereotypes. The misinformation AI spreads can undermine education and people’s world views if left unchecked. If people stay poorly informed on AI then there will be negative repercussions in society.


The pros and cons of AI according to AI

By: Samson Belihu

Note: I used the App Chat & Ask AI to generate most of the information included in this article.

Pros:

According to Chat & Ask AI, some pros to using AI as a student include:

  • Learning often extends beyond classroom hours, and students may have questions when their teachers are unavailable.
  • AI powered tutoring systems can provide instant assistance through virtual tutors that help with homework or clarify concepts. This encourages independent learning and ensures that students receive help when they need it.
  • AI is multilingual: You may ask, how is it that AI can understand and generate text in multiple languages, such as English, Spanish, French, and Chinese, and why does this matter? It matters because it allows people from different countries to communicate effectively, breaking down language barriers.

Cons:

In recent years, artificial intelligence has become an integrated part of the education landscape, offering countless resources to help students learn and succeed. However, with “great power comes great responsibility”, and many students find themselves misusing AI in school in ways that can undermine their education and academic integrity.

Typically, when using AI you want easy access to information, and that’s fine. But there’s an issue: AI doesn’t have access to its original sources, which are crucial for verifying your information. AI responses are generated from a mixture of licensed data (which the AI’s programmers may not actually hold a license for), data created by human trainers, and publicly available information up until a training cut-off in October 2023.

So AI is not as accurate as you may think, and before using the information I’ve given you here, make sure to double-check it. In closing, I asked Chat & Ask AI: “Give me cons of Chat Ai and target it for students”, and this is what it came up with:

As an AI language model, I don’t have access to external sources, databases, or the internet in real-time. My responses are generated based on a mixture of licensed data, data created by human trainers, and publicly available information up until my last training cut-off in October 2023. This means that while I can provide information, insights, and summaries based on that training, I cannot cite specific sources or provide direct references to articles, studies, or books.

Here are some key points to consider regarding my lack of sources:

1. **Training Data**: I was trained on a diverse range of texts, including books, articles, and websites, which helps me generate responses based on patterns and information found in that data.

2. **No Real-Time Access**: I can’t browse the web or access current events, databases, or any live content, so I can’t pull in or verify information from sources outside my training.

3. **Generative Nature**: My responses are generated based on patterns and knowledge from the data I was trained on, rather than retrieved from specific sources.

4. **No Citation Capability**: Because I don’t access or reference specific documents, I cannot provide citations or bibliographies for the information I present.

5. **Encouragement to Verify**: For academic or critical inquiries, I encourage users to verify information through reliable sources or databases to ensure accuracy and credibility.

If you have specific questions or need information on a particular topic, feel free to ask! I can provide insights based on my training, but I always recommend cross-referencing with authoritative sources for critical topics.