Why we need the next generation of digital trust technology














This article was contributed by Richard Gendal Brown, CTO at R3.




The connection between human beings is often the first thing that comes to mind when we think about trust. Trust allows us to do things that would be almost impossible if we had to verify everything for ourselves. Imagine if you had to inspect the kitchen of every restaurant you ever visited. The long and short of it is that most of us operate under a system of “if we trust, we don’t need to verify” in our personal lives as well as in business. 




Lack of trust in technology and the digital world




In the early days of the web, you had no way of knowing if your browser really was talking to the company you thought it was, so ecommerce and online banking struggled to take off. But the advent of the browser padlock — a visible assurance that you are connected to whom you think you are — unleashed trillions of dollars of opportunity.




Until recently, firms doing business with each other had no way of knowing if they had the same records. And so, they wasted staggering amounts of money reconciling with each other. Technologies like blockchain are solving this problem. 




But there is so much further to go. For example, when you send information to a third party, you have no technological way to trust them or their technology or know what they will do with your information. So, you have to spend a fortune on ‘data scrubbing’ or audits. Or, more likely, you don’t share sensitive data at all. It’s mind-blowing to imagine how many opportunities to create new value or serve customers better are squandered because we can’t trust how our information will be processed when it’s in somebody else’s hands.  




Consider this list of technology policy issues on the agendas of most developed nations at the start of the 2020s: 




Social networks are accused of misusing users’ personal data for corporate gain. 

Advertisers, and the large technology firms whose platforms display their ads, are accused of tracking users without their knowledge, and of inappropriately combining disparate datasets to violate users’ reasonable expectations that different online behaviors and personas can be kept separate. 

Firms of all sorts are accused of using data they obtained about an individual for one purpose to pursue unrelated business goals, without informed consent. 

Data that firms legitimately capture about users is often stored or processed with insufficiently strong controls, leading to data loss or exposure at the hands of malicious outsiders or rogue insiders. 

Firms frequently wish to share data with other firms but are unable to control this data once it leaves their systems. They fear the resulting liability, and so forego otherwise promising opportunities for themselves or their customers. 

These issues all share a single cause: today’s networked economy requires individuals and firms to share data with third parties or other parts of the same firm on an unprecedented scale, yet today’s technology provides no way to control how that data is then used, or for what purpose. 




The blunt reality is that once you have shared a piece of information with a third party, they can do whatever they like with it. The only things constraining them are ‘soft’ controls: reputation, regulation and contract law. The internet revolution has made it extraordinarily easy and cheap to share information but has provided no comparably powerful tools to control the monster we unleashed. 




The three reasons we share data 




It is as if there are some fundamental computing capabilities that we need, but don’t have. 




Consider three distinct reasons we share information with third parties:  




We often want to “outsource” our computation, using cloud computing techniques. But we’re worried that the cloud provider might misuse our data. What if we could know, in advance, that this was not possible?




Sometimes we are asked to send sensitive documents to third parties so they can verify something to their satisfaction, such as a customer’s age. But that usually means they get access to personal or confidential information: they want to know my age, but I have to give them access to my whole passport. What if we could provide proof without revealing more than we need to?
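To make the age-verification example concrete, here is a minimal sketch of hash-based selective disclosure in Python, assuming the third-party cryptography package for signatures. Every name and value here is hypothetical, and real deployments use standards such as SD-JWT or BBS+ signatures rather than this toy construction; the point is only that a verifier can check one claim ("over 18") against an issuer-signed credential without ever seeing the rest of the document.

```python
import hashlib
import json
import os

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def commit(name: str, value: str, salt: bytes) -> str:
    # A salted hash hides the value until the holder chooses to reveal it.
    return hashlib.sha256(salt + f"{name}={value}".encode()).hexdigest()


# --- Issuer (e.g., a passport authority): signs commitments to each
# attribute, rather than the attributes themselves.
issuer_key = Ed25519PrivateKey.generate()
attributes = {"name": "Alice Example", "passport_no": "X1234567", "over_18": "true"}
salts = {k: os.urandom(16) for k in attributes}
commitments = {k: commit(k, v, salts[k]) for k, v in attributes.items()}
credential = json.dumps(commitments, sort_keys=True).encode()
signature = issuer_key.sign(credential)

# --- Holder: chooses to reveal only the "over_18" attribute and its salt.
disclosure = {"name": "over_18", "value": "true", "salt": salts["over_18"]}

# --- Verifier: checks the issuer's signature over the commitments, then
# checks the single revealed attribute against its commitment. The rest of
# the passport is never seen.
issuer_key.public_key().verify(signature, credential)  # raises if forged or tampered
expected = json.loads(credential)[disclosure["name"]]
assert commit(disclosure["name"], disclosure["value"], disclosure["salt"]) == expected
print("Verified: holder is over 18, without seeing the passport.")
```

The salted hashes keep every undisclosed attribute hidden, while the issuer's signature over the full set of commitments stops the holder from inventing claims the issuer never made.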




And we often encounter situations where multiple firms voluntarily use a centralized system – such as an exchange – to facilitate trade, only to discover the exchange operator has privileged insight into the entire market’s trading strategies. What if we could collaboratively pool information without the centralized operator gaining a privileged position?




It may not be obvious at first sight, but data misuse is such a worry in the scenarios above because all of these problems have a common cause: you cannot trust somebody else’s computer. 




But what if you could sometimes trust somebody else’s technology? What if we could write applications whose owners cannot tamper with them or observe their execution? What if an application could process data its operator is not entitled to see, yet you could still trust the results it produces? What if you could validate a sensitive document on your computer and then prove to somebody else that you had done this correctly, without them ever seeing the underlying document? What if you could trade with your counterparties without the exchange operator learning your strategies?




If such a system existed and could be adopted at scale, then each one of the public policy issues listed above could be addressed. Data owners would regain control of their information. They could verify what will happen to their data – and, by extension, what will not happen to it – before sending it for processing. And if somebody else’s computer told them a fact had been verified, they could believe it. 




What’s next for digital trust technology?




The reality is that we will look back on 2022 awed by how much we managed to achieve in the digital realm when the levels of digital trust were so low.




But things are changing. Trust technology is now here. The convergence of blockchains, confidential computing, and applied cryptography is happening, and the most forward-looking firms are applying it to massively increase the levels of trust that exist within and between firms of all sizes operating in the digital realm. 




For example, applications secured by confidential computing can cryptographically prove to a business’s users that their data is encrypted in such a way that nobody, not even somebody with full control of the service, can see it. Trust technology means this can be done in a way that lets the user know when the business logic of the service has been changed. And this proof is provided by the physical hardware that is doing the computations. 
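A minimal sketch of that attestation-style check, in Python with the cryptography package and entirely hypothetical names, might look like the following. Real confidential computing platforms (for example, Intel SGX or AWS Nitro Enclaves) emit vendor-specific, hardware-signed reports with their own formats and certificate chains; this toy version only shows the shape of the trust decision: verify the hardware's signature, then compare the reported code measurement against logic the user expects.

```python
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The measurement the user expects: a hash of business logic they have
# reviewed (or that an auditor has reviewed on their behalf).
EXPECTED_MEASUREMENT = hashlib.sha256(b"approved-business-logic-v1").hexdigest()

# --- Simulated hardware: signs a report binding the code it actually loaded.
hardware_key = Ed25519PrivateKey.generate()  # stands in for the vendor's attestation key
loaded_code = b"approved-business-logic-v1"
report = hashlib.sha256(loaded_code).hexdigest().encode()
report_signature = hardware_key.sign(report)


def verify_attestation(report: bytes, signature: bytes, vendor_public_key) -> bool:
    """Trust the service only if the hardware vouches for the expected code."""
    try:
        vendor_public_key.verify(signature, report)   # did the hardware really sign this?
    except InvalidSignature:
        return False
    return report.decode() == EXPECTED_MEASUREMENT    # is it the logic the user expects?


print(verify_attestation(report, report_signature, hardware_key.public_key()))  # True
```

If the operator quietly changes the service's business logic, the measurement changes and the check fails, which is exactly how the user learns that the logic is no longer what they agreed to.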




Users build trust in a business’s good intentions and can also be enlisted as an extra pair of eyes and ears in the fight against a hacker should the unthinkable happen. Real-world users no longer have to trust the businesses and counterparties using their data; they can verify for themselves. Confidential computing, alongside the wider trust technology toolkit, is a clear win-win for all parties and will help drive the next generation of secure digital trade.




Richard Gendal Brown is the Chief Technology Officer at R3.

DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation.




If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers.




You might even consider contributing an article of your own!



