Our Ideas


Generative Storage

One of our research group members is currently investigating the concept of Generative Image Storage, which aims to develop a neural network model capable of reproducing its training data exactly, with no loss. Traditional neural network approaches focus on learning the general patterns of a dataset to perform tasks such as classification or image generation from prompts. However, these models are typically designed not to replicate the original data verbatim, as doing so can lead to poor generalisation on unseen data.

Recent advances in large language models, image generation, and video generation have highlighted the remarkable capacity of neural networks for compression, distilling terabytes of training data into models of just hundreds of gigabytes. While conventional compression methods can preserve images losslessly, the resulting file sizes remain relatively large.

In this context, our team member is exploring a novel architecture aimed at creating a lightweight neural network optimised for fast retrieval and efficient learning, potentially enabling exact generative reconstruction of training data with minimal storage requirements.
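One way to picture the idea is a coordinate-based (implicit) representation: the "image" lives in the model's weights, and retrieval means re-evaluating the model at each pixel coordinate. The sketch below is our own minimal illustration, not the team member's actual architecture; it uses random Fourier features with a linear readout, where having more features than pixels lets a single least-squares solve reconstruct the data to machine precision.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny 8x8 grayscale "image" standing in for real training data.
image = rng.random((8, 8))

# Coordinate grid: each pixel is addressed by its (x, y) position in [0, 1].
ys, xs = np.mgrid[0:8, 0:8] / 7.0
coords = np.stack([xs.ravel(), ys.ravel()], axis=1)          # shape (64, 2)

# Random Fourier features: with more features (128) than pixels (64),
# a linear readout can interpolate the data essentially exactly.
freqs = rng.normal(scale=4.0, size=(2, 64))
proj = coords @ freqs                                        # shape (64, 64)
features = np.concatenate([np.sin(proj), np.cos(proj)], 1)   # shape (64, 128)

# "Storage" here is a single least-squares solve for the readout weights;
# the weights are now the stored image.
weights, *_ = np.linalg.lstsq(features, image.ravel(), rcond=None)

# "Retrieval": re-evaluate the model at the pixel coordinates.
reconstruction = (features @ weights).reshape(8, 8)

print(np.max(np.abs(reconstruction - image)))  # error near machine precision
```

A learned neural encoder would replace the fixed random features and the closed-form solve with trained layers; the point of the toy is only that exact reconstruction from a compact parametric model is possible in principle.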

The Initiation of Thought

Reframing Artificial Intelligence beyond probabilistic prediction algorithms.

A Conceptual and Ideological Framework for Thought-Generating Machines

Current large language models (LLMs) excel at pattern recognition and probabilistic sequence prediction, but they lack a fundamental ingredient of human cognition: the initiation of thought. While human minds continuously generate spontaneous internal activity—micro-noise, subconscious evaluations, emotional modulations—LLMs remain inert unless prompted. This essay proposes a conceptual and ideological framework for a new class of artificial systems: Thought-Initiating Machines (TIMs). Drawing from diffusion models, neuroscience, and cognitive science, we argue that endowing models with an intrinsic loop of stochastic impulse generation, persistent internal state, and self-refinement could transform predictive engines into genuinely reflective, internally active systems.
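The three ingredients named above—stochastic impulse generation, persistent internal state, and self-refinement—can be caricatured in a few lines. The class and dynamics below are purely our own illustrative assumptions, not a proposed TIM implementation: noise plays the role of spontaneous internal activity, a state vector persists across cycles, and a damping step stands in for internal evaluation.

```python
import numpy as np

class ThoughtInitiatingLoop:
    """Toy sketch of a TIM-style loop: the system generates activity
    without external prompts, keeps a persistent state, and refines it."""

    def __init__(self, dim=16, seed=0):
        self.rng = np.random.default_rng(seed)
        self.state = np.zeros(dim)   # persistent internal state
        self.history = []

    def impulse(self):
        # Stochastic impulse generation: spontaneous internal "micro-noise".
        return self.rng.normal(scale=0.1, size=self.state.shape)

    def refine(self, candidate):
        # Self-refinement: squash the candidate into a bounded regime,
        # standing in for an internal evaluation/selection step.
        return np.tanh(candidate)

    def step(self):
        # One unprompted cycle: impulse -> integrate -> refine -> persist.
        candidate = 0.9 * self.state + self.impulse()
        self.state = self.refine(candidate)
        self.history.append(self.state.copy())

loop = ThoughtInitiatingLoop()
for _ in range(50):
    loop.step()   # runs with no external input at all
```

The contrast with an LLM is the control flow: here the loop, not a user prompt, drives activity, and each cycle conditions on the state left by the last one.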

Read more