Encryption has been on a long journey, spanning centuries, from leather scrolls to quantum-resistant algorithms. But if we look at companies' security requirements 25 years ago, the world was a very different place. I was selling encryption software in 2000, when nobody apart from governments knew they needed it. It was free for home or personal use, so trying to sell encryption in a world where users were used to getting it for free was difficult.
One of the most notable examples at that time was OpenSSL, an open source project that provided free encryption tools to secure online communications. Another popular tool was PGP (Pretty Good Privacy), which had free and commercial versions. The free version was widely used by privacy advocates, journalists and technologists to encrypt emails and files.
Eventually, however, public-key cryptography and tools such as PGP began to gain traction for securing email and online communication. It was a pivotal moment, as encryption went from being a nice-to-have to being essential for privacy and trust online.
Co-founder and COO, Vaultree.
Unlock encrypted data without compromising security
Fully homomorphic encryption (FHE) has also been in the mix for approximately 15 years. But it has been labeled as too complicated, requiring too much processing power, taking up too much disk space, being too slow, and more.
But we have seen a breakthrough in FHE whereby customers use it not only to encrypt data, but also to query, decrypt and work with it. You can even search data faster when it is encrypted than when it is not!
Without going into long explanations of how integers (the raw material that makes encryption possible) and cryptographic algorithms work, in simple terms, only the user who has access can understand and read the data. To anyone else, the data is just garbage.
Why is this so important? If we look at how encryption tools work today, we create data, which we then encrypt. Every time we search it, every time we move it, we decrypt it, because that is the only way to make the data work for us. Once we have decrypted it, we must encrypt it again to keep it secure. So we have multiple touchpoints where the data, which we tell users is encrypted, is in fact not encrypted.
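To make the contrast concrete, here is a minimal sketch of computing on data while it stays encrypted. It uses Paillier, a simple additively homomorphic cousin of full FHE, in pure Python; the tiny primes and the whole setup are illustrative assumptions for readability, not Vaultree's technology and not secure for real use.

```python
# Toy demo: adding two numbers without ever decrypting them (Paillier scheme).
# Illustration only; real deployments use keys of 2048 bits or more.
from math import gcd
import secrets

p, q = 293, 433                                 # toy primes, far too small for security
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)    # lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)     # inverse of L(g^lam mod n^2) mod n

def encrypt(m: int) -> int:
    while True:
        r = secrets.randbelow(n - 1) + 1        # random blinding factor
        if gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return (pow(c, lam, n2) - 1) // n * mu % n

# The homomorphic property: multiplying ciphertexts adds the plaintexts,
# so a server can total encrypted values it can never read.
a, b = 42, 58
ciphertext_sum = (encrypt(a) * encrypt(b)) % n2  # computed without decrypting
assert decrypt(ciphertext_sum) == a + b          # only the key holder recovers 100
```

The point of the sketch is that the party doing the arithmetic never sees a plaintext: the decrypt-to-use touchpoints described above simply disappear.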
This may be fine for data at rest, but once data migrates from A to B, in many cases we send the encryption keys along with it. That is the equivalent of sending the keys to the castle with the castle. What's more, the moment the user wants to do anything with that data (which is arguably when it is most useful), all that protection is gone.
So being able to let customers use data while it still has a safety net around it, and ensuring the data remains confidential, is a major advance. Most importantly, we have no encryption keys traveling with the data; the keys are generated at the point of login.
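The article does not spell out how keys can be generated at login rather than shipped with the data, but one common pattern is to derive them from the user's credentials at sign-in. The sketch below illustrates that pattern using Python's standard PBKDF2 plus the cryptography package's Fernet wrapper; the parameter choices and the Fernet wrapper are my assumptions, not Vaultree's scheme.

```python
# Minimal sketch: derive the encryption key at login from the user's
# passphrase, so no key ever travels with the data. Illustrative only.
import base64
import hashlib
import secrets
from cryptography.fernet import Fernet  # pip install cryptography

def key_from_login(passphrase: str, salt: bytes) -> bytes:
    # PBKDF2-HMAC-SHA256; the iteration count is a typical modern baseline.
    raw = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 600_000)
    return base64.urlsafe_b64encode(raw)  # Fernet expects a urlsafe 32-byte key

salt = secrets.token_bytes(16)            # non-secret, stored beside the ciphertext
key = key_from_login("correct horse battery staple", salt)
token = Fernet(key).encrypt(b"patient record 117")

# At the next login, the same passphrase and salt regenerate the same key:
assert Fernet(key_from_login("correct horse battery staple", salt)).decrypt(token) == b"patient record 117"
```

Because the key is recomputed at each login and discarded afterwards, the data can move between systems with only the (non-secret) salt attached.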
Is bootstrapping the answer?
Many in the industry advocate FHE bootstrapping as an alternative solution. This is a clever trick that allows encrypted data to be refreshed so that it can be used in computations without becoming too noisy to decrypt. But it is important to recognize both its promise and its challenges.
Done correctly, bootstrapping has significant potential if it is implemented efficiently. It could unlock powerful capabilities in computation, especially in domains where data sensitivity is paramount.
However, bootstrapping is still computationally intensive. Even with optimized schemes, it can take seconds per operation, which means it cannot be used for real-time applications. Bootstrapping implementations are also highly complex.
That said, dismissing bootstrapping outright may be premature. As research and engineering efforts continue, including our own, there is reason to believe that more efficient and scalable implementations are within reach.
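For intuition about why bootstrapping matters, here is a toy simulation of the "noise budget" that FHE ciphertexts carry. The numbers are invented for illustration; real budgets depend entirely on the scheme and its parameters.

```python
# Conceptual toy: each homomorphic multiplication consumes some of a
# ciphertext's noise budget; once the budget is too low to decrypt reliably,
# bootstrapping refreshes the ciphertext at a steep compute cost.
# All numbers here are made up for illustration.
FRESH_BUDGET = 60        # bits of noise headroom in a fresh ciphertext
COST_PER_MULT = 25       # hypothetical noise cost of one multiplication

budget = FRESH_BUDGET
for i in range(1, 7):
    if budget < COST_PER_MULT:
        print(f"mult {i}: budget exhausted -> bootstrap (slow) resets it")
        budget = FRESH_BUDGET
    budget -= COST_PER_MULT
    print(f"mult {i}: {budget} bits of budget left")
```

Without bootstrapping, the computation above would simply have to stop after two multiplications; with it, computations of unlimited depth become possible, at the cost of those expensive refresh steps.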
We live in a data-driven world, and FHE will be key to enabling access to data and the use of technologies such as AI tools. However, I recently read an article in which Elon Musk discussed the fact that we are running out of real-world data to train AI models.
This idea (often called “peak data”) suggests that the internet no longer offers enough fresh, high-quality data to significantly improve large AI models.
In other words, we will reach a point where data cannot be put to use in ways that benefit organizations because of all the rules we put around it, some of which are self-imposed, others required by regulators.
Many throw caution to the wind
Many regulations are vague, which means some organizations will interpret them in a way that reduces risk by taking a more cautious approach. Other companies throw caution to the wind. We are seeing this play out in real time.
Take Meta as an example. The company has faced multiple fines, most notably in 2025 a €200 million fine from the European Commission under the Digital Markets Act (DMA). The fine was issued because Meta's “pay or consent” model, where users had to either pay for ad-free access or consent to data tracking, violated users' right to a genuine choice.
Regulators argued that this setup pressured users into giving up their data, undermining the DMA's goal of fair digital competition.
Improving lives
I would argue that we must make data more usable, particularly where data helps make the world a better place. Clinical trials, for example, have faced challenges due to disparate data, which hinders progress. If a trial is quite niche, the organization must be able to obtain a decent dataset to run tests and analyze the results in a meaningful way.
We are helping organizations keep data in the jurisdiction where it is supposed to be and still collaborate, without moving the data and without plaintext flying across the internet. This means we can improve people's lives; we can stop people dying from diseases we could prevent because we understand more.
Of course, we want to use data in a way that still respects privacy. This is where we see FHE as an enabler. We can have AI using encrypted data to make better decisions without exposing the underlying data owner at a user level. It is a brave new world, but an exciting one.
This article was produced as part of TechRadar Pro's Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadar Pro or Future plc. If you are interested in contributing, find out more here: