We have long sought secure ways to exchange data. Current methods include cryptography, hashing and math problems whose solutions demand enormous computing power. Quantum computing could render some of our current methods insecure and obsolete, while enabling new methods.
Cryptography uses codes to protect information and communications. Data is encrypted using a secret key, and the message, along with the secret key, is given to the recipient, who then uses the key to decrypt the message. The weakness of this approach is that if the key is compromised, anyone holding it can decipher the secret message.
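As a minimal sketch of this kind of secret-key scheme, the snippet below uses Python's cryptography package (an assumed dependency; any symmetric cipher would illustrate the same point): whoever holds the key can both encrypt and decrypt.

```python
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()  # the shared secret key
cipher = Fernet(key)

token = cipher.encrypt(b"meet at noon")  # sender encrypts the message
plain = cipher.decrypt(token)            # recipient uses the same key to decrypt
assert plain == b"meet at noon"

# The weakness described above: anyone who obtains `key` can run
# Fernet(key).decrypt(token) and read the secret message.
```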
Hashing is a protocol used in cryptography. It reveals the integrity of data by creating a ‘digital fingerprint’ of the original data. This is useful for verification (for example, signing into your e-mail): the e-mail provider compares a hash of the password you type with the stored hash of your original password. So, the provider can verify that the person signing in knows the password without storing the actual password (it stores only a hash of it). To learn more about cryptographic hashing, see my article here. Also see my article on Merkle trees, which use hashing extensively.
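As a hedged illustration of that verification flow, here is a sketch using Python's standard hashlib (real providers use salted, deliberately slow hash functions, but the comparison logic is the same):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Create a SHA-256 'digital fingerprint' of the data."""
    return hashlib.sha256(data).hexdigest()

stored_hash = fingerprint(b"my-password")  # the provider stores only this

# At sign-in, the provider hashes what the user typed and compares fingerprints:
attempt = b"my-password"
assert fingerprint(attempt) == stored_hash  # verified without storing the password itself
```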
Conventional key distribution
Current ciphers used to distribute keys rely on math problems to protect themselves from attackers. These problems are simple to state but demand an enormous amount of processing power to solve: it is easy to find the product of two prime numbers, but difficult to factor the product and recover the two primes.
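A toy demonstration of this asymmetry (with small, well-known primes for readability; real keys use primes hundreds of digits long):

```python
p, q = 104_723, 104_729  # two primes
n = p * q                # multiplication: effectively instant at any size

def factor(n: int) -> tuple[int, int]:
    # Trial division: the work grows with the square root of n, which becomes
    # astronomically slow at cryptographic key sizes
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return 1, n

assert factor(n) == (p, q)  # easy here, infeasible for 2048-bit n
```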
Such conventional key distribution methods will be rendered obsolete by advances in CPU power and the rise of quantum computers.
Quantum Threats to Encryption
Quantum computing will likely be able to break today's asymmetric encryption with ease; Shor's algorithm, for example, can efficiently factor the prime products described above. It will also weaken hashing algorithms and could impact the integrity of blockchains, since it challenges both asymmetric cryptography and hashing.
Enter quantum cryptography, which may enable the secure exchange of information, even in the presence of quantum computers.
Heisenberg Uncertainty Principle
Quantum key distribution proceeds by using light particles exchanged between sender and recipient to establish a key. This protocol relies on the Heisenberg uncertainty principle, or indeterminacy principle, which states that “the position and the velocity of an object cannot both be measured exactly, at the same time, even in theory.”
Quantum Key Distribution
Quantum key distribution addresses the challenges of distributing keys by using quantum protocols. It is built on laws of nature rather than on computational hardness: the underlying physical processes are not vulnerable to increasingly powerful computing systems.
Quantum key distribution is an optical technology which automates the delivery of encryption keys between parties sharing an optical link. There are two types of quantum key distribution systems: discrete-variable and continuous-variable.
How Does Quantum Key Distribution Work?
Quantum mechanics has a core characteristic: in quantum systems, the act of measuring the system disturbs it. Two parties engaged in creating a key exchange protocol will require a quantum channel (an optical link over which photons are exchanged) and a conventional channel (over which they can compare notes on their measurements).
The transmitter sends the key to the receiver as a stream of light particles, known as photons. Each photon has a polarization which can be set by the filter it is passed through; photons can be polarized in four different directions (vertical, horizontal and the two diagonals). For each incoming photon, the recipient randomly chooses between two differently polarized detectors to measure its direction.
At the end of the process, the recipient will have a key of 1s and 0s. They then get on a call with the sender and compare notes on which detector they used for each photon. Results where the sender's filter and the recipient's detector did not match are thrown out; the rest are kept. The parties are now left with a sequence of identically polarized and measured photons, which forms the final key. Using this key, the recipient can decrypt the message which has been sent to them.
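This sifting step can be simulated in a few lines. The sketch below is a toy illustration (random numbers standing in for a real quantum channel), assuming two detector bases, rectilinear (+) and diagonal (x):

```python
import random

n = 16
sender_bits  = [random.randint(0, 1) for _ in range(n)]
sender_bases = [random.choice("+x") for _ in range(n)]  # filter used per photon
recv_bases   = [random.choice("+x") for _ in range(n)]  # detector chosen at random

# When the bases match, the measurement reproduces the sent bit;
# when they don't, the outcome is random.
recv_bits = [bit if s == r else random.randint(0, 1)
             for bit, s, r in zip(sender_bits, sender_bases, recv_bases)]

# Over an ordinary call, the parties compare bases (never bits) and keep
# only the positions where their choices matched.
key_sender   = [b for b, s, r in zip(sender_bits, sender_bases, recv_bases) if s == r]
key_receiver = [b for b, s, r in zip(recv_bits,  sender_bases, recv_bases) if s == r]
assert key_sender == key_receiver  # both sides now hold the same sifted key
```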
Both types of quantum key distribution protocols face the problem of transmission losses over large distances. Transmission losses increase exponentially as the distance increases.
Quantum Key Distribution holds enormous potential for secure data transfer. Researchers are refining both discrete-variable quantum key distribution and continuous-variable quantum key distribution. Once the distance related losses are reduced, this technology will power the next generation of secure, quantum computer-proof data transfer.
In the 20th century, scientists and thinkers predicted that technology would change our lives fundamentally. Despite our technological achievements, many of these changes have not materialized. As Peter Thiel said, “We wanted flying cars, instead we got 140 characters.”
Transformative new technologies rise, create new sectors and help us achieve phenomenal things. Then, at some point, they stop advancing. Output and productivity gains plateau, often for decades.
‘Tether Theory’ is my explanation for this phenomenon.
Tether Theory: The elements which help to create and establish a disruptive new technology eventually become the limitations on that technology’s further growth and innovation.
The ‘elements’ from the definition could include: founders, investors, established actors in the sector (like VCs, incubators and professional services firms), regulators and various stakeholders.
Per Tether Theory, sectors created by disruptive new technologies go through 6 stages:
Stage 2: Establishing Best Practices
Through multiple swift cycles of iteration and experimentation, early movers establish a set of best practices and norms for the new technology. The sector’s dynamics are still rapidly evolving at this stage.
In some cases, multiple competing versions of the technology could vie for supremacy, often in platform wars. Some examples include:
Stage 3: Standardization
As the technology continues to evolve, stakeholders and regulators gain a good enough understanding of it to craft standards for its use. After standardization, light regulation follows. Sometimes regulators create a sandbox to experiment with the technology and with different regulatory approaches.
Standardization gives a clear shape and form to the sector. This enables capital markets to more confidently step in and fund new projects in the sector. Enterprise firms begin to evaluate the technology and consider how to engage with it. A period of excitement, high competition and growth ensues. The mix of capital and competition helps the technology further evolve and improve.
Stage 4: Consolidation & Entrenchment
As the sector grows and matures, a set of market leaders emerges. These firms wield a high level of influence in the sector and shape its dynamics. They also have large client or customer bases, deep pockets or both. Faced with competition from these incumbents, less competitive or smaller companies either go out of business or are acquired by the incumbents. The sector experiences consolidation.
A group of established stakeholders, such as financial and professional service providers, insurance firms, information providers and dependent industries, forms around the market leaders. These stakeholders usually crave stability and predictability.
The incumbent leaders of the sector work to retain their dominance. In doing so, they intentionally or unintentionally resist changes to the technology and to the industry’s dynamics.
Stage 5: Increased Regulation & Dependencies
As the technology becomes mainstream, it becomes subject to regulation. In the face of impending regulation, the market leaders, along with freshly formed industry associations, shape the conversation in a way that protects and furthers their interests. Complex regulation favors deep-pocketed market leaders, and the professional ecosystem around them makes it difficult and expensive for new firms to enter the market.
However, regulation is a double-edged sword, and regulatory risk becomes a major concern at this stage. Even regulation that favors the industry’s status quo can hamper the incumbents’ further growth. Additionally, over time, the technology becomes subject to indirect regulation, such as tax or trade policy. Each layer of regulation adds complexity and, consequently, demands more incumbent attention to compliance.
As the sector matures, its processes develop dependencies on other industries and on established stakeholders. These dependencies further limit the flexibility of the sector’s companies.
Stage 6: Stagnation
The sector now operates in a mature, layered regulatory environment and is subject to dependencies with entrenched players in other industries. Thus, it enters a period of stagnation or incremental growth. Innovative ideas must now go through routine, slow and bureaucratic processes. In this environment, experimentation becomes difficult, creativity is stifled and even innovative companies may see their corporate culture begin to decline.
A sector can exist in the stagnation stage for decades. Innovation is limited to tinkering with already-tested ideas. In this risk-averse environment, major technological change is unlikely to occur.
The airline industry is a great example of a sector in stagnation. Commentators examining it have observed that “travel time across the Atlantic…for the first time since the Industrial Revolution, is getting longer rather than shorter.”
Closing Thoughts
Tether Theory suggests that the people and organizations that help create, establish and entrench a new technology eventually prevent that technology from achieving its full potential. While standardization and regulation are important steps for a sector to mature, visionary founders and savvy policymakers should vigilantly protect creativity, experimentation and competition.
Tether Theory does not suggest that the world is not making important technological advancements. These advancements happen all the time and have major societal impacts. However, preventing — or at least postponing — entrenchment and stagnation could increase the beneficial impact of a disruptive technology and make sure that the gains from the technology are shared by a wide variety of stakeholders of different sizes and capabilities.
Edge Computing uses local devices to compute, store and communicate data. Edge computing can therefore be thought of as an extension of cloud services (which allow compute, storage, analytics, and other functions to be executed remotely) to the user’s local devices, speeding up computation and making it more secure.
So, Cloud Computing + Edge Devices = Edge Computing.
Edge Devices
Edge devices include routers, routing switches, integrated access devices (IADs) and multiplexers. Edge devices enable users to connect to and share data with an external network, such as a service provider, carrier or enterprise primary network.
Factors Leading To Edge Computing
Edge computing was driven by the following factors:
1. Network Latency
Cloud computing is supported by external data centers; the distance to these data centers reduces data transfer speed due to network latency.
2. Limited Bandwidth
In addition to the distance, limited bandwidth further slows cloud data transfer.
3. Privacy, Security & Compliance
Cloud data exchange is often subject to the privacy, security and compliance requirements of multiple jurisdictions.
4. Configuration of the System
Cloud deployments typically require system integrators to configure and maintain them, adding cost and complexity.
These problems with cloud computing led to edge computing. The use of edge devices decentralizes cloud functions. These edge devices (like routers) act as facilitators to speed up data processing. User devices (like smartphones) can decide whether to compute or store data locally or to send it to edge devices; edge devices, in turn, decide which information they can process themselves and which needs to be sent to the cloud. So, many (and ideally most) functions can be performed by user devices and edge devices, with the convenience of cloud processing as necessary.
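As a hypothetical sketch of that triage logic (the thresholds and return values below are illustrative assumptions, not a real API), a user device or edge router might route each workload like this:

```python
def route_workload(payload_bytes: int, needs_heavy_compute: bool,
                   latency_budget_ms: int) -> str:
    """Decide where a piece of data should be processed (illustrative thresholds)."""
    if not needs_heavy_compute and payload_bytes < 64_000:
        return "device"  # small, simple work stays on the smartphone or sensor
    if latency_budget_ms < 50:
        return "edge"    # too latency-sensitive for a round trip to a data center
    return "cloud"       # heavy, latency-tolerant work goes to the cloud

print(route_workload(512, False, 1000))         # -> "device"
print(route_workload(5_000_000, True, 20))      # -> "edge"
print(route_workload(5_000_000, True, 60_000))  # -> "cloud"
```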
Internet Of Things (IoT)
IoT, the phenomenon of embedding everyday physical objects (like thermostats, toasters and washing machines) with chips, software and sensors, enables these objects to be networked and to perform functions with feedback from one another and without human intervention. These objects thus become ‘smart’.
Since IoT requires the creation and processing of enormous amounts of data, cloud and edge solutions are crucial enablers of IoT (especially in use cases that involve sensitive data or Artificial Intelligence).
Take the example of a driver who wants her vehicle to signal to her house’s garage door that she has arrived. The short window between her arrival and the garage door’s expected response is a challenge for cloud computing: even minimal network latency or other network issues would undermine the usefulness of an IoT solution here.
Enter edge computing. Most of the information in this transaction can be handled locally, so the centralized cloud environment won’t be necessary in most cases. However, if there is an authentication issue, a cloud solution may be required, and edge computing still allows for this. Additionally, even in cases where the cloud is required, it would be accessed at high bandwidth. So, edge computing enables multiple IoT use cases.
For a further discussion of the IoT applications that edge computing enables, see my earlier article describing seven use cases.
A word of caution: these privacy and security gains require significant upgrades in user devices themselves. Most user devices (such as microwaves) use dated firmware and hardware, which makes them insecure and provides an entry point into the network for bad actors. Cloud environments, especially those maintained by Amazon, Google and Microsoft, are generally highly secure. Since realizing the gains of decentralized edge computing requires a new generation of user devices, these gains will likely be realized in the medium or long term.
What Are Self-Healing Systems & How Can You Develop One?
When people get injured, their bodies self-heal. What if technology could do the same?
Companies are racing to develop self-healing systems, which could improve quality, cut costs and boost customer trust. For example, IBM is experimenting with ‘self-managing’ products that configure, protect and heal themselves.
What Is A Self-Healing System?
A self-healing system can discover errors in its functioning and make changes to itself without human intervention, thereby restoring itself to a better-functioning state. There are three levels of self-healing systems, each with its own size and resource requirements: application level, system level and hardware level.
In typical applications, problems are documented in an ‘exceptions log’ for further examination. Most problems are minor and can be ignored. Serious problems may require the application to stop (for example, an inability to connect to a database that has been taken offline).
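In Python terms, this conventional pattern looks roughly like the sketch below (the connect_to_database helper is a hypothetical stand-in):

```python
import logging
import sys

logging.basicConfig(filename="exceptions.log", level=logging.WARNING)

def connect_to_database():
    # Hypothetical stand-in for a real connection call
    raise ConnectionError("database offline")

try:
    conn = connect_to_database()
except ConnectionError:
    # Serious problem: document it in the exceptions log, then stop
    logging.exception("Database unreachable; stopping application")
    sys.exit(1)
```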
By contrast, self-healing applications incorporate design elements that resolve problems. For example, applications that use Akka arrange elements in a hierarchy and assign an actor’s problems to its supervisor. Many such libraries and frameworks facilitate applications that self-heal by design.
Unlike application level self-healing, system level self-healing does not depend on a programming language or specific components. Rather, it can be generalized and applied to all services and applications, independent of their internal components.
The most common system level errors include process failures (often resolved by redeploying or restarting) and response time issues (often resolved by scaling and descaling). Self-healing systems conduct health checks on different components and automatically attempt fixes (such as redeploying) to recuperate to their desired states.
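A minimal system-level health-check loop might look like the following sketch (the endpoint URL and the systemctl-based restart are assumptions for illustration; production systems usually delegate this to an orchestrator such as Kubernetes):

```python
import subprocess
import time
import urllib.request

SERVICE_URL = "http://localhost:8080/health"  # hypothetical health endpoint

def restart_service() -> None:
    # Hypothetical automatic fix: restart (or redeploy) the failed process
    subprocess.run(["systemctl", "restart", "my-service"], check=False)

while True:
    try:
        urllib.request.urlopen(SERVICE_URL, timeout=2)  # periodic health check
    except OSError:
        restart_service()  # process failure -> attempt to recuperate automatically
    time.sleep(30)
```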
Hardware level self-healing redeploys services from an unhealthy node to a healthy one. It also conducts health checks on different components. Since true hardware level self-healing (for example, a machine that can heal failed memory or repair a broken hard disk) does not exist, current hardware level solutions are essentially system level solutions.
Reactive Versus Preventive Healing
Reactive Healing
Reactive healing is healing in response to an error and is already in widespread use. For example, redeploying an application to a new physical node in response to an error, thereby preventing downtime, is reactive healing.
The desirable level of reactive healing depends on how much risk a system can tolerate. For example, if a system relies on a single data center, the possibility of the entire data center losing power, resulting in all nodes not working, may be so slim that designing a system that responds to this possibility is unnecessary and expensive. However, if it is a critical system, it may make sense to design it to recuperate automatically after such an event.
Preventive Healing
Preventive healing proactively prevents errors. Take the example of preventing processing-time errors by using real-time data: you send an HTTP request to check the health of a service and allocate resources accordingly. If the service takes more than 500 milliseconds to respond, the system scales it, and if it responds in less than 100 milliseconds, the system descales it.
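Using the thresholds from that example, the scaling rule can be sketched in a few lines (the thresholds are the ones assumed above, not universal values):

```python
def adjust_capacity(response_ms: float) -> str:
    """Map a health-check response time to a scaling decision."""
    if response_ms > 500:
        return "scale"    # the service is slow: add instances
    if response_ms < 100:
        return "descale"  # the service is over-provisioned: remove instances
    return "hold"

print(adjust_capacity(620))  # -> "scale"
print(adjust_capacity(80))   # -> "descale"
```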
However, using real-time data can be troublesome if response times change a lot, because the system will scale and descale constantly (this can use a lot of resources in rigid architecture, and a smaller amount of resources in a microservices architecture).
Combining real-time and historical data is a better (and also more complex) preventive healing approach. Using our response time example, you design a system that stores response time, memory and CPU information and uses an appropriate algorithm to process it alongside real-time data to predict future needs. So, if memory usage has been increasing steadily for the past hour and reaches a critical point of 90 percent, your system determines that scaling is appropriate, thereby preventing errors.
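A sketch of that combined approach, with a simple rolling window standing in for the "appropriate algorithm" (the window size and the 90 percent critical point are the assumptions from the example):

```python
from collections import deque

class MemoryTrendScaler:
    """Scale preemptively when memory usage trends toward a critical point."""

    def __init__(self, window: int = 60, critical_pct: float = 90.0):
        self.history = deque(maxlen=window)  # stored historical samples
        self.critical_pct = critical_pct

    def should_scale(self, memory_pct: float) -> bool:
        self.history.append(memory_pct)  # combine new real-time data...
        if len(self.history) < 2:
            return False
        rising = self.history[-1] > self.history[0]        # ...with the stored trend
        return rising and memory_pct >= self.critical_pct  # act before errors occur

scaler = MemoryTrendScaler()
for sample in (70, 75, 82, 88, 91):
    if scaler.should_scale(sample):
        print(f"memory at {sample}% and rising: scale now")  # fires at 91%
```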
Designing Self-Healing Systems: Three Principles & a Five-Point Roadmap
Principles
People know their banks, like Chase and Capital One, and their favorite fintech applications, such as Venmo and Mint. However, consumers are generally not aware of data aggregators like Plaid and Finicity, which collect consumer data from banks, crunch it and feed it to fintech applications. Blockchain technology can play a key role in helping data aggregators manage consumer financial data while complying with regulations and empowering consumers.
Data Aggregation
Data aggregators use two methods to access people’s financial information. The first (now largely outdated) method is screen scraping, in which a person provides their banking usernames and passwords in exchange for using a fintech application. The second (and preferred) method is API access, in which the bank and the data aggregator share information through a direct, technology-enabled feed.
Once data aggregators have collected someone’s financial information from different bank accounts, credit cards and investment accounts, they process and format it so that it can be fed to fintech applications. This enables someone to split a bill with a friend on Venmo or set a financial goal on Mint.
Data aggregators power fintech companies in fields like personal financial planning, investment, peer-to-peer payments, lending and foreign exchange.
Blockchain’s Role
In addition to maintaining the trust of banks and fintech companies, data aggregators have to navigate the complex regulatory environment for handling sensitive consumer financial data. Blockchain technology can help data aggregators manage data in four key areas: security, privacy, analysis and auditability.
Blockchains store data in a decentralized, tamper-proof manner, thereby bolstering its security. When configured correctly, blockchains strengthen privacy by allowing data to be stored, shared and analyzed without disclosing its contents.
Two promising cryptographic techniques used alongside blockchains in this area are homomorphic encryption (which allows the analysis of encrypted data without knowledge of the data’s contents) and secure multi-party computation (which allows parties that do not trust each other to jointly analyze their data without revealing its contents to one another). These techniques can help data aggregators achieve their objective of analyzing sensitive data from multiple sources while preserving its security and privacy.
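As a hedged illustration of the first technique, the sketch below uses the python-paillier package (an assumed dependency, installable as phe), whose encryption is additively homomorphic; an aggregator could sum encrypted balances without ever seeing them:

```python
from phe import paillier  # pip install phe (python-paillier)

public_key, private_key = paillier.generate_paillier_keypair()

# Two institutions encrypt balances; the aggregator sees only ciphertexts
enc_a = public_key.encrypt(1200)
enc_b = public_key.encrypt(3400)

enc_total = enc_a + enc_b  # addition performed without decrypting anything

# Only the private key holder can read the result
assert private_key.decrypt(enc_total) == 4600
```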
Blockchain-based solutions also hold the potential for instant auditability, enabling any transacting party to easily verify compliance with the latest financial and privacy regulations.
The complex and fluid regulatory regime for consumer financial data includes federal legislation like Dodd-Frank, state laws and industry standards. Data aggregators argue that they should be regulated as consumers’ agents, which face less regulatory scrutiny than many other actors in financial services. However, even when regulations don’t directly apply to data aggregators, banks often bolster their own compliance with regulations by requiring data aggregators to enter into data sharing agreements as a condition of accessing consumer data.
A blockchain-based, tamper-proof and auditable transaction history, combined with pre-programmed rules ensuring that new transactions are compliant with current regulations, can greatly simplify regulatory compliance for data aggregators and help them maintain the trust of banks and fintech companies.
Industry Maturity & Open Banking
Though data aggregators constitute a relatively new industry within financial services, there are signs that the industry is maturing. Banks that weren’t sure what to make of data aggregators five years ago see them as valued partners today. While data aggregators experienced massive growth and drew sustained investor interest over the past several years, there are also signs of industry consolidation.
Another sign of industry maturity is the creation of the Financial Data Exchange (FDX) in 2018. Today, FDX’s membership includes data aggregators, financial institutions, fintech companies and global consulting firms. FDX’s goal is to promote an Application Programming Interface (API) and standards for transparency, security and usability that put customers in control of their financial information.
FDX’s members include companies that are actively exploring the potential of blockchain technology to put customers in control of their data. For example, a research team at Visa recently recognized the potential of the blockchain to help share customer data with fintech applications. Visa’s plan to acquire Plaid for $5.3 billion could position Visa as a leader in data aggregation and blockchain.
Seizing The Opportunity
Data aggregators, firmly established as a layer between banks and fintech applications, are now well positioned to add value to banks by analyzing their data and to enable customers to control their own data. Blockchain technology will help them pursue both these objectives while keeping regulators happy.