
Generative Artificial Intelligence (AI) for Students

This guide provides a general overview of AI knowledge, tools, and content for student research and use.

What is AI?

In 1956, at the Dartmouth Summer Research Project on Artificial Intelligence, Dartmouth assistant professor John McCarthy coined the phrase "Artificial Intelligence".

S. L. Andresen, "John McCarthy: father of AI," in IEEE Intelligent Systems, vol. 17, no. 5, pp. 84-85, Sept.-Oct. 2002, doi: 10.1109/MIS.2002.1039837.

Abstract: If John McCarthy, the father of AI, were to coin a new phrase for "artificial intelligence" today, he would probably use "computational intelligence." McCarthy is not just the father of AI, he is also the inventor of the Lisp (list processing) language. The author considers McCarthy's conception of Lisp and discusses McCarthy's recent research that involves elaboration tolerance, creativity by machines, free will of machines, and some improved ways of doing situation calculus.

URL: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1039837&isnumber=22293

HAI - Stanford University Human-Centered Artificial Intelligence


Artificial Intelligence Definitions

HAI was founded by Stanford University to advance AI research, policy, practices, and education. You can read the HAI annual report here.

Bias and Misinformation in AI

Systematic errors and prejudices can be present in any AI-generated content. Biases can appear in several critical ways.

1.    Training Data Bias

  • If care is not taken in the training process, models may absorb historical or societal biases from their training data
  • Training data may reflect historical stereotypes that then surface in the AI's generated content.

2.    Representation Bias

  • Some groups may be underrepresented in the training data
  • Groups may also be misrepresented in the training data
  • Underrepresentation and misrepresentation can lead to less accurate outputs for these demographic groups
  • Less detailed and less sophisticated AI results may also be generated about these groups.
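One simple way to check for representation bias is to count how each group appears in a dataset and flag groups whose share falls below a chosen threshold. The sketch below is a minimal illustration only; the function name, record format, and threshold are hypothetical, and real audits would use far richer criteria.

```python
from collections import Counter

def representation_report(records, group_key, threshold=0.10):
    """Flag groups whose share of a dataset falls below `threshold`.

    `records` is a list of dicts and `group_key` names the demographic
    field to audit -- both are hypothetical stand-ins for real data.
    """
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {
        group: {"share": n / total, "underrepresented": n / total < threshold}
        for group, n in counts.items()
    }

# Toy data illustrating a skewed training set: 90% group A, 10% group B
sample = [{"group": "A"}] * 90 + [{"group": "B"}] * 10
report = representation_report(sample, "group", threshold=0.2)
print(report["B"])  # group B falls below the 20% threshold
```

A count like this only surfaces underrepresentation; detecting misrepresentation (inaccurate or stereotyped depictions of a group) requires examining the content itself, not just the counts.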

3.    Language Model Bias

  • Large language models (LLMs) might generate results that reinforce stereotypes
  • LLMs may also encode stereotypical associations that have no basis in reality
  • Model bias may show up not only in text generation but also in image, audio, and video generation

4.    Algorithmic Bias

  • Model architecture, as well as the mathematical algorithms in the LLM, can also introduce unwanted prejudices
  • Black-box decision making may be difficult for the designers of the model to understand
  • Early model design decisions can amplify biases

5.    Strategies to help avoid bias

  • Having diverse training data
  • Auditing AI-generated outputs
  • Designing algorithms that detect bias
  • Transparency in the model development and data curation processes
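Auditing AI-generated outputs often starts with a simple fairness metric, such as comparing positive-outcome rates across groups (sometimes called a demographic parity check). The sketch below assumes outcomes have already been labeled per group; the function name and data are hypothetical, and this is only one of many possible audit metrics.

```python
def demographic_parity_gap(outcomes):
    """Return the gap between the highest and lowest positive-outcome
    rates across groups, plus the per-group rates.

    `outcomes` maps each group name to a list of 0/1 results
    (1 = favorable outcome). A large gap suggests the outputs
    warrant closer review for bias.
    """
    rates = {group: sum(vals) / len(vals) for group, vals in outcomes.items()}
    return max(rates.values()) - min(rates.values()), rates

# Toy audit: group_a receives favorable outcomes 75% of the time,
# group_b only 25% of the time, so the gap is 0.5
gap, rates = demographic_parity_gap({
    "group_a": [1, 1, 1, 0],
    "group_b": [1, 0, 0, 0],
})
print(gap)  # prints 0.5
```

A gap near zero does not prove the system is fair; it only means this one metric found no disparity, which is why audits typically combine several measures with human review.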

Misinformation

Misinformation can be a problem in GenAI and AI output.

  • Hallucination is the term most often used to describe false information created by an AI system to defend its statements. Misinformation created by hallucination can often be persuasive to users.
  • Some AI tools allow users to create deepfakes. The most persuasive deepfakes are created using AI audio and video tools. Unlike hallucinations, most deepfakes are created by humans for the purpose of tricking people into believing some form of propaganda.
  • Output presented by GenAI systems may be out of date. It may lack currency because the AI model has not been trained on new or updated data.

Recent Articles on AI and Generative-AI (GenAI)