In his research, Professor Marko Huhtanen from the University of Oulu, who specializes in applied and computational mathematics, introduces a new method for compressing images. This technique combines several well-known compression methods, leveraging their best features. The study has been published in IEEE Signal Processing Letters.
JPEG is the most commonly used file format in digital photography and image storage. Many photographers also save images in RAW format, allowing for more versatile post-processing. Depending on the application, JPEG may retain only 10%–25% of the information available at the time of capture. Whether the lost information is significant depends on the viewer.
This is a universal issue in the digital world—affecting everyone who takes and sends image files. Converting an image into a transmittable format is relatively simple.
“We don’t see a perfect image because the amount of information is infinite. So we must compress and retain only the essential, sufficient data. This is done mathematically in a way that must also be algorithmically fast,” Professor Huhtanen explains.
Rethinking compression
In Huhtanen’s method, the image is operated on horizontally and vertically with diagonal matrices, so that an approximation of the image is built up layer by layer. The process resembles a simplified version of Berlekamp’s switching game, but in continuous form.
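The paper’s precise algorithm is not reproduced in this article; the sketch below is only one plausible reading of the description above, in which each layer scales a fixed all-ones pattern by diagonal matrices from the left (rows) and right (columns), and the diagonals are fitted to the current residual by alternating between the horizontal and vertical directions. The function names, the number of sweeps, and the random test image are illustrative assumptions, not material from the study.

```python
# Minimal sketch, NOT the algorithm from Huhtanen's paper: each layer has the
# form diag(d) @ ones @ diag(e) (a rank-one layer), and the diagonals d, e are
# fitted to the current residual by alternating horizontal/vertical sweeps.
# The alternating sweeps converge to the leading singular pair, which is also
# what a PCA/SVD step would produce.
import numpy as np

def fit_layer(residual: np.ndarray, sweeps: int = 10):
    """Alternating least-squares fit of diag(d) @ ones @ diag(e) to residual."""
    m, n = residual.shape
    d, e = np.ones(m), np.ones(n)
    for _ in range(sweeps):
        d = residual @ e / (e @ e)      # horizontal sweep: best d for fixed e
        e = residual.T @ d / (d @ d)    # vertical sweep: best e for fixed d
    return d, e

def layered_approximation(image: np.ndarray, n_layers: int) -> np.ndarray:
    """Build the approximation layer by layer from diagonal-scaled patterns."""
    residual = image.astype(float)
    approx = np.zeros_like(residual)
    for _ in range(n_layers):
        d, e = fit_layer(residual)
        layer = np.diag(d) @ np.ones_like(residual) @ np.diag(e)  # == np.outer(d, e)
        approx += layer
        residual -= layer
    return approx

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((64, 64))          # stand-in for a grayscale image
    for k in (1, 5, 20):
        err = np.linalg.norm(img - layered_approximation(img, k)) / np.linalg.norm(img)
        print(f"{k:2d} layers: relative error {err:.3f}")
```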
“Image compression is a fundamental problem in imaging—how to pack an image into the smallest possible space for fast transmission and sharing. The original image takes up too much space in computer memory, so we aim to preserve only 10%–25% of the image’s information.”
Current JPEG technology is based on an algorithm developed about 50 years ago by Nasir Ahmed, an American professor of electrical and computer engineering.
“He wanted to base compression on principal component analysis (PCA) but could not implement it algorithmically. He compromised and created a simpler method using the discrete cosine transform (DCT). He applied for research funding, but this proposal was rejected because the idea was considered too simple to be interesting,” said Huhtanen.
Despite this, the results were published, and over time, the discrete cosine transform became a standard in image compression.
“Scientific publishing involves a lot of randomness, and it is hard to predict what will ultimately be considered significant. And, as in this case, significance is also relative.”
Comparing JPEG, DCT, and PCA approaches
The goal of compression is to discard as much image data as possible without the human eye noticing any difference between the original and compressed image. “JPEG is a simple technique: the image is divided into blocks of 64 pixels (8×8), and each block is compressed using the discrete cosine transform. Mathematically, it is not very interesting, but in practice, it works excellently.
“Ahmed’s original idea, PCA, was sidelined in image compression. It was considered too labor-intensive and rigid to develop further. These two approaches have lived separate lives. In my research, I managed to remove this rigidity, allowing the ideas to be mixed and the best aspects of both to be utilized. In other words, DCT and PCA are not algorithmically isolated from each other,” said Huhtanen.
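For a concrete picture of the block-DCT scheme described in the quotation above, the following simplified illustration (an assumption-laden sketch, not the actual JPEG codec and not Huhtanen’s method) splits a grayscale image into 8×8 blocks, transforms each block with the 2-D discrete cosine transform, keeps only the largest coefficients, and inverts the transform. Real JPEG adds quantization tables, zig-zag ordering, and entropy coding; the `keep` parameter and test image here are illustrative choices.

```python
# Simplified, JPEG-flavoured illustration only: block DCT plus coefficient
# thresholding, without quantization tables or entropy coding.
import numpy as np
from scipy.fft import dctn, idctn

def block_dct_compress(image: np.ndarray, keep: int = 10) -> np.ndarray:
    """Keep the `keep` largest-magnitude DCT coefficients in each 8x8 block."""
    h, w = image.shape
    out = np.zeros_like(image, dtype=float)
    for i in range(0, h - h % 8, 8):
        for j in range(0, w - w % 8, 8):
            block = image[i:i+8, j:j+8].astype(float)
            coeffs = dctn(block, norm="ortho")
            # Zero everything except the `keep` largest coefficients.
            thresh = np.sort(np.abs(coeffs), axis=None)[-keep]
            coeffs[np.abs(coeffs) < thresh] = 0.0
            out[i:i+8, j:j+8] = idctn(coeffs, norm="ortho")
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    img = rng.random((64, 64))
    rec = block_dct_compress(img, keep=16)   # 16/64 = 25% of coefficients kept
    print("relative error:", np.linalg.norm(img - rec) / np.linalg.norm(img))
```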
Huhtanen does not speculate on the applicability or spread of his ideas but notes that he has solved a problem that has not seen much progress in a long time. A broad family of algorithms has been developed, with PCA being just one special case. The best application areas remain to be seen.
Understanding PCA and digital ‘negatives’
What does PCA mean in image compression? “Tim Bauman’s website demonstrates how an image becomes clearer as more information is included. At some point, the eye no longer perceives a difference, even though the amount of information increases. This is the compression technique originally envisioned by Ahmed. It can be implemented based on the algorithms developed in the late 1960s.”
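As a rough illustration of what PCA-style compression means for a pixel matrix (an assumed interpretation for this article, not code from the study): keep only the leading components of a truncated singular value decomposition, and the approximation error shrinks as more components are included, just as in the demonstration described above.

```python
# Illustrative PCA/SVD-style compression of a pixel matrix: keep the leading
# components and observe the error shrink as more are included.
import numpy as np

def pca_compress(image: np.ndarray, n_components: int) -> np.ndarray:
    """Truncated SVD reconstruction from the leading components."""
    U, s, Vt = np.linalg.svd(image.astype(float), full_matrices=False)
    return U[:, :n_components] * s[:n_components] @ Vt[:n_components, :]

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    img = rng.random((128, 128))            # stand-in for a grayscale image
    for k in (1, 4, 16, 64, 128):
        err = np.linalg.norm(img - pca_compress(img, k)) / np.linalg.norm(img)
        print(f"{k:3d} components -> relative error {err:.3f}")
```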
Those who used film photography remember negatives. Using this analogy, digital image compression can be seen as converting an image into a “negative,” from which the necessary parts are extracted and transformed into a visible image. The recipient receives a “negative form,” which is then rendered into a visible format.
Benefits for speed, storage and energy
We’ve all experienced slow internet connections and watched images or websites load gradually on the screen.
“Individual components arrive through the channel, and the image sharpens as the compression is decompressed. If this can be done better than it is currently, image transfer speeds up and more information can be transmitted. A digital image consists of pixel rows, which are numbers. Cleverly reducing the amount of data is a key issue,” said Huhtanen.
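The progressive-decoding idea in the quote can be sketched in the same spirit (again an assumed illustration, reusing SVD-style components rather than the paper’s own construction): components “arrive” one at a time, and the displayed image sharpens as each is added to a running sum.

```python
# Assumed illustration of progressive decoding: components arrive one by one
# and the reconstruction is refined incrementally.
import numpy as np

def transmit(image: np.ndarray, n_components: int):
    """Simulate a channel delivering one rank-one component at a time."""
    U, s, Vt = np.linalg.svd(image.astype(float), full_matrices=False)
    for k in range(n_components):
        yield s[k] * np.outer(U[:, k], Vt[k, :])

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    img = rng.random((64, 64))
    shown = np.zeros_like(img)
    for k, component in enumerate(transmit(img, 10), start=1):
        shown += component                   # the image sharpens step by step
        err = np.linalg.norm(img - shown) / np.linalg.norm(img)
        print(f"after {k:2d} components: relative error {err:.3f}")
```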
Huhtanen’s method makes it possible to compress images into a smaller amount of data, saving storage space and speeding up transmission. Computation becomes faster and lighter, and the method is well suited to parallel processing. Images can be built up in stages, allowing for more precise control and adjustment during compression. It also saves energy.
More information:
Marko Huhtanen, Switching Games for Image Compression, IEEE Signal Processing Letters (2025). DOI: 10.1109/lsp.2025.3543744
Citation: Image compression method combines classic techniques for greater efficiency and flexibility (2025, November 13), retrieved 13 November 2025 from https://techxplore.com/news/2025-11-image-compression-method-combines-classic.html

