Student Case Study on Leveraging AI Effectively for Education

As an Engineering student on a Level 3 BTEC, a significant portion of my studies involves processing, understanding, and writing text containing engineering insights and data.

The biggest processes involved in my studies at the moment are the following:

  • Drafting content for assignments
  • Extracting information from specific sources
  • Quickly researching specific data and sources

Drafting templates for my assignments

The majority of my time is spent writing or adjusting assignments. This is admittedly not ideal, as part of an engineering student’s education is developing a profound and nuanced understanding of the underlying engineering and scientific principles, not producing well-written essays.

As it stands, many students may have a perfectly reasonable understanding of the underlying principles but end up limited in the grade they achieve because they are expected to write in a certain way. That style takes time and exposure to develop, and many of us have not been exposed to the right kind of material written in the tone those assignments demand.

Generative AI technology potentially helps resolve this issue. Despite my initial expectation that it would only make me ‘lazier’ and more reliant, I have noticed that generative AI, when used with intention, has further developed my drafting and professional writing skills through repeated exposure and contrast rather than making me dependent on it. Every time I use generative AI or Grammarly (which is itself an often-overlooked type of AI), I don’t just get a hollow one-off result; I get a little extra exposure to better writing, and this passively improves my own writing quality as my brain is trained by the AI’s suggestions.

The key here is intention. We can easily imagine the outcome if we thoughtlessly ask AI to generate text for us and without a second thought move on to the next shallow task.

What I’ve noticed is that when AI is used as a sort of booster for our own mental bicycle, it teaches us to pedal faster. But if we let the booster do all the work and stop pedalling entirely, only then does our strength start to fade, and we become reliant on the booster.

At the moment, the best tool for generating drafts seems to be Claude’s Sonnet model, both in terms of writing quality and the user experience it offers when working with text generation.

Extracting data and meaning from large sources

Engineering, if done right, involves absorbing a lot of information and data from a wide range of sources. For me this includes textbooks such as Structures by J.E. Gordon and Thinking in Systems by Donella Meadows, YouTube videos from creators such as Khan Academy, The Engineering Mindset, and CrashCourse, as well as an endless stream of Wikipedia articles. But this is very time-consuming, and sometimes we need to quickly extract specific information from a larger context we are mostly familiar with.

A notion that is quite popular in pedagogy is the forgetting curve: the idea that we lose a fraction of the information we retain over every set period of time. If it is true that we lose around 70% of what we learn within a week, then the ability to quickly re-expose ourselves to relevant material, without the friction of parsing an entire book or video to find it, should in theory help us fight the forgetting curve more easily.

[Image: Ebbinghaus Forgetting Curve, graph from Purdue University]
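The forgetting curve is commonly modelled as exponential decay, R = e^(-t/S), where R is retention and S is the relative strength of the memory. As a rough illustration only: the value of S below is an assumption I back-solved from the ~70%-forgotten-in-a-week figure, not a fitted or authoritative constant.

```python
import math

def retention(t_days: float, strength: float) -> float:
    """Ebbinghaus-style retention model: R = exp(-t / S)."""
    return math.exp(-t_days / strength)

# Choose S so that ~30% is retained after 7 days (i.e. ~70% forgotten):
# exp(-7 / S) = 0.30  =>  S = -7 / ln(0.30)
S = -7 / math.log(0.30)

for day in (1, 3, 7, 14):
    print(f"day {day:2d}: ~{retention(day, S):.0%} retained")
```

The steep early drop is the point: a quick re-exposure a day or two after learning, which a tool like this makes nearly frictionless, interrupts the curve where it is falling fastest.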

Google NotebookLM is the tool I use for this. When using it I am often left in awe of how far computer intelligence has come. As the English mathematician and philosopher Alfred North Whitehead once said:

Civilization advances by extending the number of important operations which we can perform without thinking of them.

And the ability to have a computer parse through a thousand pages’ worth of information and return an exact insight or piece of data with over 96% accuracy in mere seconds is nothing short of exhilarating. But I digress.

Google NotebookLM, at the time of writing, is completely free to use. It lets users add text, YouTube videos, web pages, PDFs, and so on as sources, and then query what appears to be a specially tuned version of Gemini to extract information from those sources.

I have used it both for extracting specific ideas or data, such as key material properties, and for drawing out larger insights about a source that may not be immediately obvious. As with all other AI tools, I would strongly discourage using it with blind faith; treat it instead as a really smart but somewhat clumsy classmate who is willing to do a large variety of tasks for you at astounding speed, sometimes at the cost of reliability. The discretion to know when to trust the AI and when not to develops with use, and with your own expertise in the field being discussed.

When it comes to learning engineering concepts, NotebookLM is extremely useful, as it empowers you to easily dissect excellent sources of information for further insights. The most important part of using it is making sure you have good sources, as the model is only as good as the sources you give it. General AI chatbots can pull information out of their large training datasets, but in my experience it’s better to work from an excellently written source, ideally one approved by your tutors or commonly used in your target industry and location. I’ve personally had tutors mark my work down simply because I used terminology that was too American.

The tool also has a popular feature that generates a short podcast-style audio clip in which two synthetic hosts digest and discuss the information. What’s interesting about this feature is that it often displays an impressive ability to read between the lines and make connections between ideas that aren’t obvious at all, sometimes surfacing insights I had not even considered. The use cases for this deep insight-finding ability are vast, but some of my favourites are:

  • Feeding it a large set of Wikipedia pages on philosophy to get a more anthropomorphised view, particularly on ontology.
  • Feeding it personal journal entries, asking to see recurring underlying patterns and insights (There may be privacy concerns regarding this, as you are feeding sensitive information to a large organisation. I would suggest getting a good understanding of the underlying technology and Google’s policies if you are concerned)
  • Simply generating a comprehensive analysis of a complex topic, pitched at any level from elementary school to graduate study.

Researching and verifying data for assignments

Perplexity AI has been my best friend this academic year for gathering a large pool of potential sources to parse and use in my assignments. One of the hardest assignments I had was drafting engineering projects intended to improve the sustainability of our city. It required a lot of research and answers to very specific questions that I could not risk an AI model hallucinating, and it required citations for every claim.

Perplexity AI solved both issues because, unlike a traditional LLM, it primarily bases its responses on sources it finds and ‘interprets’ from the internet. With those sources at the forefront of its attention, the likelihood of hallucination is cut down drastically. On top of this, the interface is extremely convenient: with every response I get a list of the articles and PDFs the information came from.
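The underlying pattern is usually called retrieval-augmented generation: retrieved documents are placed directly in the prompt, so the model answers from them rather than from memory alone and can cite them. A minimal sketch of the idea, where the function name, prompt wording, and source fields are my own illustration and not Perplexity’s actual pipeline:

```python
def build_grounded_prompt(question: str, sources: list[dict]) -> str:
    """Assemble a prompt that puts retrieved sources ahead of the question,
    steering the model to answer from them and cite by number."""
    numbered = "\n\n".join(
        f"[{i}] {s['title']}\n{s['excerpt']}" for i, s in enumerate(sources, 1)
    )
    return (
        "Answer using ONLY the sources below. Cite them as [1], [2], ...\n\n"
        f"Sources:\n{numbered}\n\nQuestion: {question}"
    )

prompt = build_grounded_prompt(
    "What share of our city's emissions come from buildings?",
    [{"title": "City Energy Report 2023", "excerpt": "Buildings account for ..."}],
)
print(prompt)
```

Because every claim the model makes is meant to trace back to a numbered source, the user can click through and verify, which is exactly the workflow described above.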

I made sure to read through any that seemed relevant, and I ended up finding sources very similar to what I was hoping to produce, giving me the exact figures I needed, one after the other. I’ve had similar experiences since, and I find this serendipitous side effect to have surprisingly big implications.

Finding relevant and reliable sources of information is a vital part of creating new content and documents, and this approach to gathering them is much more helpful than a traditional search engine. Search algorithms are (at the moment) largely keyword-based, whereas LLMs, thanks to the deep neural networks powering them, can search based on connections between ideas, much as our brains do. For a general topic, keyword-based searching will likely surface ample high-quality resources; but when, as engineers or researchers, we look for information on more obscure topics, the most relevant and useful documents can get buried under less relevant, SEO-optimised results.

Normally we would overcome this by opening each document and skimming it, but with Perplexity and similar AI tools we no longer need to: the model does it for us and serves up the results with the important information already extracted.
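The keyword-versus-connections distinction can be shown with a toy comparison: a keyword match misses a document that describes the same idea in different words, while a similarity score over vector representations can still rank it highly. The three-dimensional vectors below are made-up numbers for illustration; real systems use learned embeddings with hundreds of dimensions.

```python
import math

def cosine(a, b):
    """Cosine similarity: how closely two vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

query = "fatigue failure of steel beams"
doc = "cyclic loading causes cracks in structural members"

# Keyword search: exact word overlap only.
overlap = set(query.split()) & set(doc.split())
print("keyword overlap:", overlap)  # empty set: keyword search misses the doc

# Semantic search: hypothetical 3-d embeddings placing related ideas close together.
query_vec = [0.9, 0.1, 0.2]
doc_vec = [0.8, 0.2, 0.3]  # the doc sits near the query in meaning-space
print("cosine similarity:", round(cosine(query_vec, doc_vec), 2))
```

The two sentences share no words at all, yet they describe the same engineering phenomenon, which is precisely the case where connection-based retrieval beats keywords.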

A criticism of this approach may be that having information served to us like this neglects the wider context it may be nested in, but I would point out that we still have access to the sources, and we still have the ability (and responsibility) to read the original source if we deem it necessary.