Biocomputing - a cutting-edge field of technology - operates at the intersection of biology, engineering, and computer science. It seeks to use cells or their sub-component molecules (such as DNA or RNA) to perform functions traditionally performed by an electronic computer.
The ultimate goal of biocomputing is to mimic some of the biological ‘hardware’ of bodies like ours - and to use it for our computing needs. From less to more complex, this could range from individual molecules such as DNA and RNA to engineered whole cells.
Cells Already Compute
Cells are, in several respects, far more powerful at computing than our best computers.
Biocomputing’s engineering challenge is to gain a granular level of control of the reactions between organic compounds like DNA or RNA.
Overheating & High Energy Use
Traditional computers use microchips, which heat up quickly. Supercomputers are usually a collection of several high-speed traditional computers, combined into a single unit. Generally, they are not qualitatively different from traditional computers. Even so, supercomputers use a lot of energy, heat up quickly, and require massive cooling units in order to function at full speed. On the other hand, biological matter can perform calculations and process data without using as much energy, and without heating up significantly.
Regular computers perform one task at a time and switch quickly between tasks to give the user a seamless experience of multiple tasks running simultaneously. Biological systems, on the other hand, engage in ‘parallel computation’ – whereby multiple tasks can be executed truly simultaneously.
Early proof-of-concept work has been completed using myosin - a superfamily of motor proteins which cause muscle contraction and convert chemical energy into mechanical energy. Myosin-enabled biocomputing could perform multiple computations simultaneously.
Self-Organizing and Self-Repairing
Biological molecules also display an intelligent ability to self-organize and self-repair. So, biocomputing engineers will have to find ways to simulate this intelligent ‘software’ on top of the biological molecule ‘hardware’ to produce, organize, and repair the biocomputing system.
Similar to a living organism, the “software” in biological systems is responsible for producing and assembling the hardware, which in turn helps run the software.
While biocomputing is in an early phase, biocomputers have the potential to enable far more powerful computing than today’s best computers – while using less energy and generating less heat. Furthermore, biocomputers will be able to use parallel computing, which will represent a significant improvement upon regular computing, and will be able to better self-organize and self-repair. While authoritative estimates of the eventual environmental impact of biocomputing do not yet exist, biocomputing could potentially reduce our reliance on the silicon and rare earth minerals that power today’s computers.
Sensor fusion is the process of combining data from multiple physical sensors in real time, while also adding information from mathematical models, to create an accurate picture of the local environment. A system can then use this data to plan and act toward an objective or destination. Sensor fusion is an important part of the design of autonomous systems.
The cost of sensors has been declining for decades, while the quality of information collected by sensors has been increasing. Still, each type of sensor has its shortcomings. Even if a sensor provides a significant volume of high-quality data, the sensor could be thrown off under some conditions, leading to inaccurate readings. For example, a sensor on an autonomous vehicle could be thrown off by unusual weather, smog and pollution, speed, altitude, visibility, angle and positioning. By adding more sensors, engineers can often improve the accuracy of collected data.
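To make that intuition concrete, here is a minimal sketch (not from the original article) of one classic fusion technique, inverse-variance weighting, in which two noisy readings of the same quantity are combined so that the fused estimate is more certain than either sensor alone; the sensor names and numbers are purely illustrative.

```python
# Minimal sketch of inverse-variance weighting: fusing two noisy readings
# of the same quantity (e.g., distance to an obstacle, in meters).
# Sensor names and numbers below are illustrative, not from the article.

def fuse(reading_a: float, var_a: float, reading_b: float, var_b: float):
    """Combine two measurements, weighting each by the inverse of its variance."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * reading_a + w_b * reading_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)  # always smaller than either input variance
    return fused, fused_var

# Example: a radar reading (more precise) and a camera-based estimate (noisier).
radar, radar_var = 25.3, 0.04
camera, camera_var = 24.1, 0.5

estimate, variance = fuse(radar, radar_var, camera, camera_var)
print(f"fused distance: {estimate:.2f} m (variance {variance:.3f})")
```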
The Process: Sense, Perceive, Plan, Act, Repeat
Sensors “sense” by collecting data from the physical world. Then, systems “perceive”, or interpret this data based on algorithms, use cases and requirements. Next, they “plan”, or find a path to move forward toward the desired outcome or destination. Lastly, based on their plans, they “act”, or follow a path toward the intended destination or outcome. The system goes on repeating these steps until it accomplishes its task or reaches its destination.
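A rough sketch of how this sense-perceive-plan-act loop might be wired together in software follows; every function name here is a hypothetical placeholder standing in for real sensor drivers, perception models and planners.

```python
# Illustrative sense -> perceive -> plan -> act loop. All names are hypothetical
# placeholders, not an actual autonomy framework.

def sense(sensors):
    """Collect raw data from each physical sensor."""
    return {name: sensor.read() for name, sensor in sensors.items()}

def perceive(raw_data):
    """Interpret raw data into a model of the environment."""
    return {"obstacles": raw_data.get("lidar", []), "position": raw_data.get("gps")}

def plan(world_model, goal):
    """Choose the next action that moves the system toward its goal."""
    return "stop" if world_model["obstacles"] else "move_toward_goal"

def act(action, actuators):
    """Execute the chosen action on the system's actuators."""
    actuators.execute(action)

def run(sensors, actuators, goal, reached_goal):
    # Repeat the four steps until the task is complete or the destination reached.
    while not reached_goal():
        world = perceive(sense(sensors))
        act(plan(world, goal), actuators)
```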
Direct and Indirect Sensor Fusion
Direct fusion happens when data originates from identical sensors in a given environment, while indirect fusion occurs when data originates from non-identical sensors in a given environment. For improved environmental awareness, it is important to boost both the quality and the quantity of data collected. So, ideally, a system should use both direct and indirect sensor fusion.
Sensor fusion can be achieved using a wide variety of sensors. These include: accelerometers, GPS, magnetic sensors, phased arrays, electronic support measures, seismic sensors, sonobuoys, radio telescopes, cameras, radar, LIDAR and sonar systems.
Benefits of Sensor Fusion
Sensor fusion has several benefits, including more accurate localization and tracking, better situational awareness, and more consistent, dependable system behavior.
Today, sensor fusion is primarily used in autonomous vehicles like self-driving cars. However, research and testing is underway to use sensor fusion in other use cases in fields like space exploration, remote search and rescue, industrial internet, defense, and environmental monitoring. The field of sensor fusion is also seeing increased interest among academics and roboticists.
Sensor fusion increases the number of identical sensors, diversifies the types of sensors, and uses mathematical models to synthesize and refine information collected by sensors. Since sensor fusion enables more accurate localization, positioning, detecting and tracking, it improves autonomous systems’ situational awareness and makes the systems more consistent, accurate and dependable. The use of sensor fusion in autonomous vehicles will lead to significant innovation and know-how, which will find application in a wide variety of industries.
What is Ambient Intelligence?
Mark Weiser, CTO of Xerox Corp's Palo Alto Research Center, said in 1991: “The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it.” Weiser proved prescient: satellite-based cell phones and the internet are examples of profound, invisible technologies.
In the future, ambient intelligence will be similarly profound, yet invisible. Just as ambient music plays in the background to enhance an environment, ambient intelligence is embedded in a user’s immediate environment (or ambience). The ambience is embedded with a range of sensors, making the environment intelligent and ready to respond to user desires and needs. Ambient intelligence, once fully evolved, will have major consequences for many industries, including patient care and assisted living. Seniors will gain increased independence with ambient intelligence and their critical care providers will be kept informed with real-time data.
Excellent connectivity within our homes, along with internet-connected devices, is helping us map behavioral data about ourselves. The idea is to push the sensors and computing into the background (the ambience), while the actions a user needs help with, or the information a user requires, are brought into the foreground using robotics or an information display. Ambient intelligence aims to enhance, in intuitive ways, the experiences of people carrying on with their daily routines and activities.
Features of ambient intelligence are:
Ambient intelligence systems act in four ways: sensing, reasoning, acting and interacting.
Implementation & Trends
Ambient intelligence will improve mobility, nutrition, energy and resource use, waste management and many other fields. Systems will begin in smaller spaces, like homes or cars, and expand to workplaces, restaurants, airports and stations. Eventually, they will be embedded across larger open areas and smart cities. They will be adopted at varied paces across different nations, depending on their approaches and regulations for privacy, cybersecurity and control (government versus private).
Judging by current technology trends, ambient intelligence is already becoming a reality. Mobile networks are becoming faster with lower latency, an estimated 50 billion smart devices are now online, and wearables are experiencing significant growth. These are all key enablers of the ambient intelligence systems described earlier.
While ambient intelligence is currently in an early stage, enabling technologies are coming into place rapidly and growing exponentially. Once ambient intelligence reaches a mature state, we will be able to communicate with devices naturally and efficiently. Human-machine interactions could potentially become more intuitive and efficient than human-to-human interactions. Computers and smart devices will also ‘converse’ with each other, with delegated authority from their human users. This could reduce people’s ‘cognitive load’, allowing them to focus on activities that are more complicated, fun or meaningful.
We have long sought secure ways to exchange data. Some current methods include cryptography, hashing and requiring the solution of math problems that demand enormous computing power. Quantum computing could render some of our current methods insecure and obsolete, while enabling new methods.
Cryptography uses codes to protect information and communications. Data is encrypted using a secret key, and the encrypted message and the secret key are both given to the recipient, who uses the key to decrypt the message. The problem with this approach is that if the key is compromised, anyone holding it can decipher the secret message.
Hashing is a protocol used in cryptography. It reveals the integrity of data by creating a ‘digital fingerprint’ of the original data. This is useful for verification (for example, signing into your e-mail). The e-mail provider compares a hash of the password you submit to a hash of your original password to verify that they match. So, the e-mail provider can verify that the person signing in knows the password without storing the actual password (it stores only a hash of it). To learn more, see my earlier articles on cryptographic hashing and on Merkle trees, which use hashing extensively.
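As a small illustration of hash-based verification, here is a sketch using Python's standard hashlib; real services would also add a salt and use a deliberately slow hash such as bcrypt or scrypt.

```python
import hashlib

def fingerprint(data: str) -> str:
    """Return a SHA-256 'digital fingerprint' of the input."""
    return hashlib.sha256(data.encode()).hexdigest()

# At sign-up, the provider stores only the hash, never the password itself.
stored_hash = fingerprint("correct horse battery staple")

# At sign-in, the provider hashes the submitted password and compares fingerprints.
def verify(submitted_password: str) -> bool:
    return fingerprint(submitted_password) == stored_hash

print(verify("correct horse battery staple"))  # True
print(verify("wrong guess"))                   # False
```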
Conventional key distribution
Current ciphers used to distribute keys rely on math problems to protect themselves from attackers. The problems are easy to compute in one direction but require an enormous amount of processing power to reverse: it’s easy to find the product of two prime numbers, but difficult to factor the product and recover the two primes.
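A toy illustration of that asymmetry is below; the primes are far too small to be secure (real keys use primes hundreds of digits long), but the one-way flavor is the same.

```python
# Multiplying two primes is instant; recovering them from the product takes search.
# These numbers are toy-sized for illustration only.

p, q = 10007, 10009
product = p * q                      # easy: a single multiplication

def factor(n: int):
    """Brute-force trial division: the work grows rapidly with the size of n."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1

print(product)          # 100160063
print(factor(product))  # (10007, 10009), found only after ~10,000 trial divisions
```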
Such conventional key distribution methods could be weakened by continuing advances in computing power and rendered obsolete by the rise of quantum computers.
Quantum Threats to Encryption
Quantum computing will likely be able to break widely used asymmetric encryption algorithms and weaken hashing algorithms. It could also impact the integrity of blockchains, which rely on both.
Enter quantum cryptography, which may enable the secure exchange of information, even in the presence of quantum computers.
Heisenberg Uncertainty Principle
Quantum key distribution proceeds by using light particles exchanged between sender and recipient to establish a key. This protocol uses logic outlined in the Heisenberg uncertainty principle or indeterminacy principle, which states that: “the position and the velocity of an object cannot both be measured exactly, at the same time, even in theory.”
Quantum Key Distribution
Quantum key distribution addresses the challenge of distributing keys by using quantum protocols. It is built on physical laws of nature rather than hard math problems, so it is not vulnerable to increases in computational power.
Quantum Key Distribution is an optical technology which automates the delivery of encryption keys between parties that are sharing an optical link. There are two types of quantum key distribution systems: discrete-variable and continuous-variable.
How Does Quantum Key Distribution Work?
Quantum mechanics has a core characteristic: in quantum systems, the act of measuring the system disturbs it. Two parties engaged in creating a key exchange protocol will require a quantum channel (an optical link over which photons are sent) and a classical channel (over which they can compare their measurement choices).
The transmitter sends the key to the receiver as a stream of light particles, known as photons. Each photon is given a polarization that depends on which filter it passes through; photons can be polarized in four different directions (vertical, horizontal and the two diagonals). The recipient measures each photon’s polarization using one of two different detectors, chosen at random.
At the end of the process, the recipient has a string of 1s and 0s. The two parties then get on a call and compare notes on which detector was used for each photon. They throw out the results where their detectors and filters did not match, and keep the ones where they did. They are now left with a sequence of identically prepared and measured photons, which forms the final key. Using this key, the recipient can decrypt the message that has been sent to them.
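The sifting step described above can be simulated in a few lines; this sketch ignores physical noise and eavesdropping checks and simply uses random bases to stand in for the polarization filters.

```python
import secrets

# Toy BB84-style sifting: the sender encodes random bits in randomly chosen
# bases ("+" rectilinear or "x" diagonal); the receiver measures each photon
# in a randomly chosen basis. Only positions where the bases match are kept.
# Real systems also estimate error rates to detect eavesdropping.

N = 32
sender_bits  = [secrets.randbelow(2) for _ in range(N)]
sender_bases = [secrets.choice("+x") for _ in range(N)]
recv_bases   = [secrets.choice("+x") for _ in range(N)]

# If the receiver guesses the wrong basis, the measurement result is random.
recv_bits = [
    bit if s_basis == r_basis else secrets.randbelow(2)
    for bit, s_basis, r_basis in zip(sender_bits, sender_bases, recv_bases)
]

# Over the public channel, both sides compare bases (never bit values)
# and discard positions where the bases differ.
key = [b for b, s, r in zip(recv_bits, sender_bases, recv_bases) if s == r]
print("sifted key:", "".join(map(str, key)))
```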
Both types of quantum key distribution protocols face the problem of transmission losses over large distances. Transmission losses increase exponentially as the distance increases.
Quantum Key Distribution holds enormous potential for secure data transfer. Researchers are refining both discrete-variable quantum key distribution and continuous-variable quantum key distribution. Once the distance related losses are reduced, this technology will power the next generation of secure, quantum computer-proof data transfer.
In the 20th century, scientists and thinkers predicted that technology would change our lives fundamentally. Despite our technological achievements, many of these changes have not materialized. As Peter Thiel said, “We wanted flying cars, instead we got 140 characters.”
Transformative new technologies rise, create new sectors and help us achieve phenomenal things. Then, at some point, they stop advancing. Output and productivity gains plateau, often for decades.
‘Tether Theory’ is my explanation for this phenomenon.
Tether Theory: The elements which help to create and establish a disruptive new technology eventually become the limitations on that technology’s further growth and innovation.
The ‘elements’ from the definition could include: founders, investors, established actors in the sector (like VCs, incubators and professional services firms), regulators and various stakeholders.
Per Tether Theory, sectors created by disruptive new technologies go through 6 stages:
Stage 2: Establishing Best Practices
Through multiple swift cycles of iteration and experimentation, early movers establish a set of best practices and norms for the new technology. The sector’s dynamics are still rapidly evolving at this stage.
In some cases, multiple competing versions of the technology could vie for supremacy, often in platform wars. Some examples include:
Stage 3: Standardization
As the technology continues to evolve, stakeholders and regulators gain a good enough understanding of it to craft standards for its use. After standardization, light regulation follows. Sometimes regulators create a sandbox to experiment with the technology and with different regulatory approaches.
Standardization gives a clear shape and form to the sector. This enables capital markets to more confidently step in and fund new projects in the sector. Enterprise firms begin to evaluate the technology and consider how to engage with it. A period of excitement, high competition and growth ensues. The mix of capital and competition helps the technology further evolve and improve.
Stage 4: Consolidation & Entrenchment
As the sector grows and matures, a set of market leaders emerge. These firms wield a high level of influence in the sector and shape its dynamics. They also have large client or customer bases, deep pockets or both. Faced with competition from these incumbents, less competitive or smaller companies either go out of business or are acquired by the incumbents. The sector experiences consolidation.
A group of established stakeholders, such as financial and professional service providers, insurance firms, information providers and dependent industries form around the market leaders. These stakeholders usually crave stability and predictability.
The incumbent leaders of the sector work to retain their dominance. In doing so, they intentionally or unintentionally resist changes to the technology and to the industry’s dynamics.
Stage 5: Increased Regulation & Dependencies
As the technology becomes mainstream, it becomes subject to regulation. In the face of impending regulation, the market leaders, along with freshly formed industry associations, shape the conversation in a way that protects and furthers their interests. Complex regulation favors deep-pocketed market leaders, and the professional ecosystem around them makes it difficult and expensive for new firms to enter the market.
However, regulation is a double-edged sword. Regulatory risk becomes a major concern at this stage. Even regulation that favors the industry’s status quo can hamper the incumbents’ further growth. Additionally, over time, the technology becomes subject to indirect regulation such as tax or trade policy. Each layer of regulation adds complexity, and consequently, incumbent attention to compliance.
As the sector matures, its processes develop dependencies with other industries and with established stakeholders. These dependencies further limit the flexibility of the sector’s companies.
Stage 6: Stagnation
The sector now operates in a mature, layered regulatory environment and is subject to dependencies with entrenched players in other industries. Thus, it enters a period of stagnation or incremental growth. Innovative ideas must now go through routine, slow and bureaucratic processes. In this environment, experimentation becomes difficult, creativity is stifled and even innovative companies may see their corporate culture begin to decline.
A sector can exist in the stagnation stage for decades. Innovation is limited to tinkering with already-tested ideas. In this risk-averse environment, major technological change is unlikely to occur.
The airline sector is a great example of a sector in stagnation. Commentators examining the sector observed that “travel time across the Atlantic…for the first time since the Industrial Revolution, is getting longer rather than shorter.”
Closing Thoughts
Tether Theory suggests that the people and organizations that help create, establish and entrench a new technology eventually prevent that technology from achieving its full potential. While standardization and regulation are important steps for a sector to mature, visionary founders and savvy policymakers should vigilantly protect creativity, experimentation and competition.
Tether Theory does not suggest that the world is not making important technological advancements. These advancements happen all the time and have major societal impacts. However, preventing — or at least postponing — entrenchment and stagnation could increase the beneficial impact of a disruptive technology and make sure that the gains from the technology are shared by a wide variety of stakeholders of different sizes and capabilities.
Edge Computing uses local devices to compute, store and communicate data. Edge computing can therefore be thought of as an extension of cloud services (which allow compute, storage, analytics, and other functions to be executed remotely) to the user’s local devices, speeding up computation and making it more secure.
So, Cloud Computing + Edge Devices = Edge Computing.
Edge Devices
Edge devices include routers, routing switches, integrated access devices (IADs) and multiplexers. Edge devices enable users to connect to and share data with an external network — such as a service provider, carrier, or enterprise primary network.
Factors Leading To Edge Computing
Edge computing was driven by the following factors:
1. Network Latency
Cloud is supported by external data centers, which reduces data transfer speed due to network latency.
2. Limited Bandwidth
In addition to the distance, limited bandwidth further slows cloud data transfer.
3. Privacy, Security & Compliance
Cloud data exchange is often subject to the privacy, security and compliance issues of multiple jurisdictions.
4. Configuration of the System
Cloud systems typically require specialized system integrators in order to function.
These problems with cloud computing led to edge computing. The use of edge devices decentralizes cloud functions, with edge devices (like routers) acting as facilitators that speed up data processing. User devices (like smartphones) can decide whether to compute or store data locally or to send it to edge devices. Edge devices then decide which information they can process themselves and which needs to be sent into the cloud for processing. So, many (and ideally most) functions can be performed by user devices and edge devices, with cloud processing used as necessary.
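A rough sketch of that triage logic is below; the categories and thresholds are invented purely for illustration, since real deployments use far more sophisticated placement policies.

```python
# Illustrative only: deciding where a piece of work should run.
# Thresholds and categories are hypothetical, not from the article.

def place_workload(latency_sensitive: bool, data_size_mb: float, needs_heavy_compute: bool) -> str:
    if latency_sensitive and not needs_heavy_compute:
        return "user device"        # e.g., a smartphone handles it locally
    if data_size_mb < 50 and not needs_heavy_compute:
        return "edge device"        # e.g., a router or local gateway
    return "cloud"                  # large or compute-heavy jobs go upstream

print(place_workload(latency_sensitive=True,  data_size_mb=1,   needs_heavy_compute=False))  # user device
print(place_workload(latency_sensitive=False, data_size_mb=10,  needs_heavy_compute=False))  # edge device
print(place_workload(latency_sensitive=False, data_size_mb=500, needs_heavy_compute=True))   # cloud
```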
Internet of Things (IoT)
IoT — the phenomenon of embedding everyday physical objects (like thermostats, toasters and washing machines) with chips, software and sensors — enables these objects to be networked and to perform functions with feedback from one another and without human intervention. These objects thus become ‘smart’.
Since IoT requires the creation and processing of enormous amounts of data, cloud and edge solutions are crucial enablers of IoT (especially in use cases that involve sensitive data or Artificial Intelligence).
Take the example of a driver who wants her vehicle to signal to her house’s garage door that she has arrived. The short time period between her arrival and the garage door’s expected response is a challenge for cloud computing. Even minimal network latency or other network issues would undermine the usefulness of IoT solutions here.
Enter edge computing. Most of the information in this transaction can stay local, so the centralized cloud environment won’t be necessary in most cases. However, if there is an authentication issue, a cloud solution may be required — and edge computing still allows for this. Additionally, even in cases where the cloud is required, it would be accessed at high bandwidth. So, edge computing enables many IoT use cases.
For a further discussion of the IoT applications that edge computing enables, see my earlier article describing seven use cases.
A word of caution: these privacy and security gains require significant upgrades in user devices themselves. Most user devices (such as microwaves) use dated firmware and hardware, which makes them insecure and provides an entry point into the network for bad actors. Cloud environments, especially those maintained by Amazon, Google and Microsoft, are generally highly secure. Since realizing the gains of decentralized edge computing requires a new generation of user devices, these gains will likely be realized in the medium or long term.
What are Self-Healing Systems & How Can You Develop One?
When people get injured, their bodies self-heal. What if technology could do the same?
Companies are racing to develop self-healing systems, which could improve quality, cut costs and boost customer trust. For example, IBM is experimenting with ‘self-managing’ products that configure, protect and heal themselves.
What Is A Self-Healing System?
A self-healing system can discover errors in its functioning and make changes to itself without human intervention, thereby restoring itself to a better-functioning state. There are three levels of self-healing, each with its own size and resource requirements: application level, system level and hardware level.
In typical applications, problems are documented in an ‘exceptions log’ for further examination. Most problems are minor and can be ignored. Serious problems may require the application to stop (for example, an inability to connect to a database that has been taken offline).
By contrast, self-healing applications incorporate design elements that resolve problems. For example, applications that use Akka arrange elements in a hierarchy and assign an actor’s problems to its supervisor. Many such libraries and frameworks facilitate applications that self-heal by design.
Unlike application level self-healing, system level self-healing does not depend on a programming language or specific components. Rather, it can be generalized and applied to all services and applications, independent of their internal components.
The most common system level errors include process failures (often resolved by redeploying or restarting) and response time issues (often resolved by scaling and descaling). Self-healing systems conduct health checks on different components and automatically attempt fixes (such as redeploying) to recuperate to their desired states.
Hardware level self-healing redeploys services from an unhealthy node to a healthy one. It also conducts health checks on different components. Since true hardware level self-healing (for example, a machine that can heal failed memory or repair a broken hard disk) does not exist, current hardware level solutions are essentially system level solutions.
Reactive Versus Preventive Healing
Reactive Healing
Reactive healing is healing in response to an error and is already in widespread use. For example, redeploying an application to a new physical node in response to an error, thereby preventing downtime, is reactive healing.
The desirable level of reactive healing depends on how much risk a system can tolerate. For example, if a system relies on a single data center, the possibility of the entire data center losing power, resulting in all nodes not working, may be so slim that designing a system that responds to this possibility is unnecessary and expensive. However, if it is a critical system, it may make sense to design it to recuperate automatically after such an event.
Preventive healing anticipates and prevents errors before they occur. Take the example of preventing processing-time errors by using real-time data: you send an HTTP request to check the health of a service and use the result to allocate resources. If the service takes more than 500 milliseconds to respond, the system scales it up, and if it responds in less than 100 milliseconds, the system scales it down.
However, relying only on real-time data can be troublesome if response times fluctuate a lot, because the system will scale up and down constantly (this consumes substantial resources in a rigid architecture, and fewer in a microservices architecture).
Combining real-time and historical data is a better (and also more complex) preventive healing approach. Using our response time example, you design a system that stores response time, memory and CPU information and uses an appropriate algorithm to process it alongside real-time data to predict future needs. So, if memory usage has been increasing steadily for the past hour and reaches a critical point of 90 percent, your system determines that scaling is appropriate, thereby preventing errors.
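A minimal sketch of this kind of check, using the thresholds mentioned above, might look like the following; the decision function and the memory rule are illustrative placeholders rather than a specific orchestration API.

```python
from statistics import mean

# Illustrative preventive-healing check. The "scale up"/"scale down" actions
# stand in for whatever mechanism the platform provides (e.g., adding replicas).

SCALE_UP_MS, SCALE_DOWN_MS, MEMORY_LIMIT = 500, 100, 0.90

def decide(response_times_ms: list, memory_usage_history: list) -> str:
    """Combine real-time and historical data to choose a scaling action."""
    recent = mean(response_times_ms[-5:])          # smooth out momentary spikes
    memory_rising = (
        len(memory_usage_history) >= 2
        and memory_usage_history[-1] > memory_usage_history[0]
        and memory_usage_history[-1] >= MEMORY_LIMIT
    )
    if recent > SCALE_UP_MS or memory_rising:
        return "scale up"      # act before errors occur
    if recent < SCALE_DOWN_MS:
        return "scale down"    # free unneeded resources
    return "no action"

print(decide([520, 540, 510, 530, 525], [0.70, 0.80, 0.91]))  # scale up
print(decide([80, 90, 85, 70, 95], [0.40, 0.42, 0.41]))       # scale down
```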
Designing Self-Healing Systems: Three Principles & a Five-Point Roadmap
Principles
People know their banks, like Chase and CapitalOne, and their favorite fintech applications, such as Venmo and Mint. However, consumers are generally not aware of data aggregators like Plaid and Finicity, which collect consumer data from banks, crunch it and feed it to fintech applications. Blockchain technology can play a key role in helping data aggregators manage consumer financial data while complying with regulations and empowering consumers.
Data Aggregation
Data aggregators use two methods to access people’s financial information. The first (now largely outdated) method is screen scraping, in which a person provides their banking usernames and passwords in exchange for using a fintech application. The second (and preferred) method is API access, in which the bank and the data aggregator share information through a direct, technology-enabled feed.
Once data aggregators have collected someone’s financial information from different bank accounts, credit cards and investment accounts, they process and format it so that it can be fed to fintech applications. This enables someone to split a bill with a friend on Venmo or set a financial goal on Mint.
Data aggregators power fintech companies in fields like personal financial planning, investment, peer-to-peer payments, lending and foreign exchange.
Blockchain’s Role
In addition to maintaining the trust of banks and fintech companies, data aggregators have to navigate the complex regulatory environment for handling sensitive consumer financial data. Blockchain technology can help data aggregators manage data in four key areas: security, privacy, analysis and auditability.
Blockchains store data in a decentralized, tamper-proof manner, thereby bolstering its security. When configured correctly, blockchains strengthen privacy by allowing for data to be stored, shared and analyzed without disclosing its contents.
Two promising cryptographic techniques in this area are homomorphic encryption (which allows the analysis of encrypted data without knowledge of the data’s contents) and secure multi-party computation (which allows parties to jointly analyze one another’s data without revealing the contents of that data). Used alongside blockchains, these techniques can help data aggregators analyze sensitive data from multiple sources while preserving its security and privacy.
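To give a flavor of the second technique, here is a toy additive secret-sharing example, one of the building blocks of secure multi-party computation; it is a simplified illustration, not a production protocol.

```python
import secrets

# Toy additive secret sharing: each party splits its private value into random
# shares that sum to the value modulo a large prime. Parties can sum the shares
# they hold to compute a total without ever seeing each other's private values.
# Real MPC protocols add many more safeguards.

PRIME = 2_147_483_647  # a large prime modulus

def share(value: int, n_parties: int):
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

# Three institutions each hold a private balance.
balances = [1_200, 3_450, 980]
all_shares = [share(b, 3) for b in balances]

# Each party sums the shares it receives (one from each institution)...
partial_sums = [sum(col) % PRIME for col in zip(*all_shares)]
# ...and the partial sums reveal only the combined total.
print(sum(partial_sums) % PRIME)  # 5630, with no individual balance disclosed
```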
Blockchain-based solutions also hold the potential for instant auditability, enabling any transacting party to easily verify compliance with the latest financial and privacy regulations.
The complex and fluid regulatory regime for consumer financial data includes federal legislation like Dodd-Frank, state laws and industry standards. Data aggregators argue that they should be regulated as consumers’ agents, which face less regulatory scrutiny than many other actors in financial services. However, even when regulations don’t directly apply to data aggregators, banks often bolster their own compliance with regulations by requiring data aggregators to enter into data sharing agreements as a condition of accessing consumer data.
A blockchain-based, tamper-proof and auditable transaction history, combined with pre-programmed rules ensuring that new transactions are compliant with current regulations, can greatly simplify regulatory compliance for data aggregators and help them maintain the trust of banks and fintech companies.
Industry Maturity & Open Banking
Though data aggregators constitute a relatively new industry within financial services, there are signs that the industry is maturing. Banks that weren’t sure what to make of data aggregators five years ago see them as valued partners today. While data aggregators experienced massive growth and drew sustained investor interest over the past several years, there are also signs of industry consolidation.
Another sign of industry maturity is the creation of the Financial Data Exchange (FDX) in 2018. Today, FDX’s membership includes data aggregators, financial institutions, fintech companies and global consulting firms. FDX’s goal is to promote an Application Programming Interface (API) and standards for transparency, security and usability that put customers in control of their financial information.
FDX’s members include companies that are actively exploring the potential of blockchain technology to put customers in control of their data. For example, a research team at Visa recently recognized the potential of the blockchain to help share customer data with fintech applications. Visa’s plan to acquire Plaid for $5.3 billion could position Visa as a leader in data aggregation and blockchain.
Seizing The Opportunity
Data aggregators, firmly established as a layer between banks and fintech applications, are now well positioned to add value to banks by analyzing their data and to enable customers to control their own data. Blockchain technology will help them pursue both these objectives while keeping regulators happy.
Radio Frequency Identification (RFID) uses readers and tags to transfer data via radio waves. A reader can communicate with a tag some distance away (between a few centimeters and 20 meters, depending on the type of RFID). Active RFID tags have batteries, which they can tap to send information to a reader. Passive RFID tags do not have batteries; they use a reader’s electromagnetic energy to communicate with the reader.
Unlike barcodes, RFID tags can be read even if they are not within a reader’s line of sight. Compared to barcodes, RFID tags are more expensive, bulkier and more prone to physical and electrical damage.
While RFID tags are often placed on top of or inside objects, they need not be. You can bury an RFID marker one inch below the ground and put information on that tag indicating what kind of material is further below the tag, and at what depth. For example, data on a tag one inch below ground could reveal that a sewer pipe exists 8 feet below the tag.
Initially used during World War 2 to distinguish friendly aircraft from enemy aircraft, RFID is now used by Fortune 500 companies for logistics. In 2004, Walmart spent $50 million on RFID initiatives. Today, a piping company in France is using an RFID system to track buried polyethylene pipes, allowing users to write data to tags in pipes up to 1.6 meters away, and in Taipei, workers are using RFID to interrogate tags within manholes, up to a depth of 2 feet, without opening them.
The value of the RFID market is projected to exceed $24.5 billion by 2020.
RFID Use In Industries
RFID systems have found application in the following industries:
1 Retail: RFID streamlines business processes, enabling faster shipping, better inventory management and better productivity. Walmart, Target and Tesco are among the leading users of RFID for retail.
2 Pharmaceuticals and Health Care: The Food and Drug Administration is weighing how to use RFID to encourage pharmaceutical manufacturers, wholesalers and retailers to thwart counterfeiting. In hospitals and clinics, RFID tagging of medical assets (and even patients) helps reduce errors and cut costs.
3 Airline baggage: Misdirected or lost bags can cost airlines over $200 per bag. A handful of airports have piloted RFID baggage tagging to reduce these costs and improve outcomes for passengers.
4 Airplane parts: Boeing and Airbus use passive RFID tags to track and maintain airplane parts on their latest aircraft. The Federal Aviation Administration allows the use of passive RFID tags, provided that the tags are not interrogated while planes are in use.
5 Animal tagging: Tracking animals from farm to table is of interest to farmers, livestock professionals, food service providers, restaurants and consumers. In addition to location tracking, RFID tags are useful for collecting and analyzing long-term trends such as genetic problems and health issues.
6 Passports: US passports include digital biometric data in embedded RFID chips, since RFID can store large, high-quality image files.
7 Libraries: In libraries, RFID chips are sometimes used instead of barcodes, to improve the book checkout process without the need for a librarian.
RFID Use Cases
Additionally, RFID systems power the following use cases:
1 Logistics & Supply Chain Visibility: In chaotic manufacturing, shipping and distribution environments, real-time data on the status of individual items provides insights that can be converted into actionable measures. RFID can identify each unique SKU and distinguish products based on style, color and size, while allowing complete traceability throughout the supply chain. So, logistics can be fully automated, minimizing errors, boosting stock control accuracy to roughly 99% and reducing out-of-stocks.
2 Item-level Inventory: RFID systems allow sellers to quickly take stock of goods. In a retail environment, this could mean that a store employee can count inventory in a few minutes using a handheld RFID reader (a minimal sketch of this counting step follows this list).
3 Timing: RFID can be used to time how long an object takes to get from one place to another. So, RFID chips are popular in races.
4 Tracking Conference Attendees: RFID is increasingly used to track attendees at conferences, reducing the need for registration desks, tracking which events are well attended and collecting various kinds of data to inform decision making.
5 Managing Materials: On large project sites, including construction sites, it can be hard to locate materials. RFID systems enable up-to-date material locations, resulting in higher worker productivity and making planning easier.
6 Tracking IT Assets: Most organizations invest significantly in IT assets such as servers, laptops, tablets, phones and other peripherals. Using RFID tags to track these assets gives IT staff the ability to quickly account for assets, improving efficiency and security.
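As referenced in the item-level inventory use case above, here is a small sketch of how raw tag reads might be turned into a count; the tag IDs and SKU lookup are invented for the example.

```python
from collections import Counter

# Illustrative only: a handheld reader reports every tag it hears, often several
# times per tag, so reads are de-duplicated before counting items per SKU.
# Tag IDs and the SKU lookup below are made up for the example.

raw_reads = [
    "E200-0001", "E200-0002", "E200-0001",  # duplicate reads are normal
    "E200-0003", "E200-0002", "E200-0004",
]

sku_of_tag = {
    "E200-0001": "TSHIRT-RED-M",
    "E200-0002": "TSHIRT-RED-M",
    "E200-0003": "TSHIRT-BLU-L",
    "E200-0004": "JEANS-32",
}

unique_tags = set(raw_reads)                        # one entry per physical item
inventory = Counter(sku_of_tag[tag] for tag in unique_tags)
print(dict(inventory))  # counts per SKU: 2 red T-shirts, 1 blue T-shirt, 1 pair of jeans
```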
Currently, organizations generally use RFID systems within closed loops - each organization tracks objects only while they are within that organization’s control. Since most organizations operate beyond closed loops (for example, by buying or selling goods), building interoperable RFID systems will allow organizations to better realize the benefits of RFID tracking.
As RFID becomes more affordable, it will see rapid adoption in pharmaceuticals, health care, food safety and retail, for both tracking and anti-counterfeiting uses. Previously, RFID could only operate within specific environments. This is changing – some tags can now be used in extreme temperatures or in the presence of chemical contaminants. In the next decade, the number of RFID tags, the value of the RFID market, the variety of RFID tags, and the number of RFID use cases can all be expected to skyrocket.
From changing the way we eat to the way we think, CRISPR has the potential to change future generations - literally. ‘Designer babies’ are probably the least it can do, not the most.
What is CRISPR?
CRISPR stands for Clustered Regularly Interspaced Short Palindromic Repeats: short stretches of nucleotide sequences crucial to the immune systems of bacteria and archaea.
It’s being touted as a cheap, efficient tool for gene editing.
CRISPR: Biology in Action
In bacteria, when a virus (a bacteriophage) attacks a bacterium, it injects its genetic material (typically DNA) into the cell. The viral genome then takes over, or hijacks, the bacterial machinery to make numerous copies of itself.
The CRISPR region contains numerous short repeats. Between these repeats, bacteria incorporate small nucleotide sequences specific to the viruses that invade them, known as ‘spacers’. Each viral invasion adds a new spacer to the region.
The spacers are small sequences taken from the viral genome that the bacterium keeps as memories, in case the same virus attacks again. When it does, the spacers are used as templates to make CRISPR RNAs (crRNAs) that are complementary to specific sequences in the viral genome. A crRNA binds to the matching foreign sequence, and the Cas9 (CRISPR-associated) protein cleaves it at that position, leaving the viral genome unable to make copies of itself or do anything at all.
The Cas9 protein typically binds to two RNA molecules: crRNA and another called trans-activating crRNA or tracrRNA. These two RNA molecules guide Cas9 to the target site to make its final cut. This target sequence is complementary to a 20-nucleotide long stretch of the crRNA.
Research has shown that the protein makes a double-stranded break, i.e. it cuts both strands of the DNA double helix. If you’re wondering why Cas9 doesn’t attack the bacterium’s own DNA, there is another mechanism to ensure this. Short DNA sequences called protospacer adjacent motifs (PAMs) act as tags that sit adjacent to the target DNA sequence. If no PAM is present, Cas9 does not cut. This is likely why the protein never attacks the CRISPR region in the bacterium’s own genome.
CRISPR for Human Research
Now that we’ve understood the natural phenomenon and imagined it in action, what exactly is the purpose of it? Gene editing isn’t anything new but, at the same time, isn't old enough for us to be sure of what we’re doing. It takes years and years of research to publish one result.
In fact, the first time we got to see what CRISPR looks like in action was only in 2017, when a team of researchers led by Mikihiro Shibata of Kanazawa University and Hiroshi Nishimasu of the University of Tokyo captured it. Their footage circulated widely as a breathtaking GIF of CRISPR chewing up DNA.
Genetic modification isn’t too new either, as we’ve been cultivating crops by selective breeding for centuries to increase the quality of produce. But the first genetically modified food granted a license for human consumption only goes back to 1994, when researchers engineered tomatoes to soften more slowly, extending their otherwise short shelf life, and named them Flavr Savr. They didn’t selectively breed the tomatoes; they modified the genes governing ripening and firmness and reproduced them.
When we refer to gene-modification or editing, we mean doing so by removing a particular nucleotide (or more) from the sequence, substituting it with another, or adding a new nucleotide to the sequence.
Any change in a gene sequence - called a mutation - affects the proteins it encodes, which are responsible for the characteristics and features of the organism. For example, sickle-cell anemia is caused by a point mutation, a change in a single nucleotide of a gene sequence. As a result of the mutation, red blood cells become sickle-shaped, leading to problems such as general body pain, a reduced ability to fight infections and vision issues. DNA profiling at the embryonic stage can tell whether a baby could be born with a genetic disorder. In principle, the embryo’s genome could then be edited to reverse the mutation, so the baby is born with no abnormality. (A side fact: the sickle-cell trait provides a genetic resistance to malaria.)
A classic example of treating a genetic disorder by gene modification is Adenosine Deaminase (ADA) deficiency. Children born with ADA deficiency have virtually no immunity to microorganisms and are diagnosed with severe combined immunodeficiency (SCID). (These babies are kept inside bubbles free of microorganisms to keep them alive, and are therefore called ‘bubble babies’.) Most do not survive past the age of 2. The deficiency can be treated with enzyme replacement therapy (ERT), in which the adenosine deaminase enzyme is given by injection so the immune system can develop and function. But the problem with ERT is that the enzyme has to be introduced into the body again and again. Because of the nature of the disorder, it is a strong candidate for gene therapy.
Gene therapy is the mechanism of introducing a gene into the body of an organism. T-cells are taken from ADA sufferers and modified to carry a corrected gene that can produce ADA. These cells are injected back into the body, where they multiply to produce functioning immune cells.
But what is the drawback? Gene therapy wasn’t so effective before CRISPR, given that anything could go wrong at any step. Incorporating the change at the right position is crucial for the gene’s function, but is also very challenging. Other gene-editing tools also come with many challenges and are time-consuming and expensive too. CRISPR on the other hand, is cheaper, more efficient and much more flexible and is consequently gaining a lot of traction.
Two 2012 research papers were pivotal in the study of CRISPR. Published in the journals Science and PNAS, the papers helped transform the bacterial defence mechanism into an efficient, programmable gene-editing tool.
Thanks to these studies, we know that Cas9 can be directed to cut any region of DNA: we simply change the crRNA nucleotide sequence to bind the complementary DNA target. Martin Jinek and his colleagues simplified the system further by fusing crRNA and tracrRNA to create a single ‘guide RNA’. So genome editing with CRISPR requires only two components: a guide RNA and Cas9.
Moreover, designing a 20-nucleotide sequence that matches the gene we want to edit is achievable. What is vital is that these 20 nucleotides are found in the target gene and ‘nowhere else in the genome’.
With CRISPR, cuts can be made at very specific positions. Cas9 itself does not care what the crRNA sequence is, so we can make our own crRNA complementary to the gene we want to change. Our cells have their own machinery for joining the cut ends back together. The cell may rejoin them as they are, which can introduce mutations. However, we can also supply our own sequence, with ends that act as templates for joining the cut, so that the cell ‘repairs’ the break with it - and voila, the gene has successfully been edited, theoretically speaking.
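As a simplified illustration of what designing a guide involves, the sketch below scans a toy DNA sequence for 20-nucleotide candidates that sit next to an NGG PAM and occur nowhere else in the (toy) genome; real guide-design tools also check the reverse strand, off-target tolerance, GC content and more, and the sequences here are made up.

```python
import re

# Toy guide-RNA candidate search. Cas9 requires the 20-nt target sequence to sit
# immediately before an "NGG" PAM. Sequences are invented for illustration only.

genome = "AAAACCGGTT" + "GATTACAGATTACAGATTAC" + "TGG" + "CCTTAAGGCCTTAAGG"
target_gene = genome[8:45]  # pretend this slice is the gene we want to edit

def guide_candidates(gene: str, full_genome: str):
    guides = []
    for i in range(len(gene) - 22):
        protospacer = gene[i:i + 20]
        pam = gene[i + 20:i + 23]
        # Keep candidates with an NGG PAM that appear exactly once in the genome.
        if re.fullmatch("[ACGT]GG", pam) and full_genome.count(protospacer) == 1:
            guides.append(protospacer)
    return guides

print(guide_candidates(target_gene, genome))  # includes 'GATTACAGATTACAGATTAC'
```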
Gene Editing Before CRISPR
Zinc Finger Nucleases and Transcription Activator-Like Effector Nucleases (TALENs) dominated the scene before CRISPR was heralded as the gene-editing tool. These tools can each cut DNA like CRISPR, but making and using them is difficult. However, they have their own applications and advantages:
ZFNs are easier to deliver to the target gene. TALENs seem to have a higher precision rate than CRISPR and may cause fewer off-target mutations or unintended consequences. Research using these tools is ongoing.
The biotech company Cellectis uses TALEN gene-editing technology to make CAR-T therapies for leukemia, and Sangamo BioSciences makes ZFNs that can disable a gene known to be key to HIV infection.
Cas9 Challenges and Its Alternative: Cpf1
CRISPR-Cpf1 has several advantages over the CRISPR-Cas9 technique, with significant implications for research and therapeutics.
Cpf1 is similar to Cas9 in function, i.e. it cuts the target DNA. Unlike Cas9, however, which leaves blunt ends, Cpf1 makes a staggered cut that leaves short overhangs, which can make inserting new sequences easier.
Applications of CRISPR
CRISPR could well be used to produce crops and animals that are healthier and more environmentally resilient, for example pest-resistant crops like today’s Bt varieties.
Experiments on mice, which share more than 90% of their genes with humans, have shown that CRISPR can knock out a defective gene associated with Duchenne Muscular Dystrophy (DMD), eliminate HIV infection and inhibit the formation of the deadly proteins involved in Huntington’s disease.
Chinese scientists in 2015 created two ‘super muscular’ beagles by disabling a gene that directs normal muscle development.
Other CRISPR animal studies have ranged from genetically modifying long-haired goats for higher cashmere production to breeding hornless cows to eliminate the pain of dehorning.
Human research moves the slowest, due to ethical and regulatory issues, and will continue to remain slow given the permanent nature of altering the human genome.
Pharmaceuticals and Biotechnology
This is probably where the most important ends meet: the future of medicine could be rewritten with CRISPR. The current drug discovery process is long, given the need to ensure patient safety and to gain a thorough understanding of side effects. One drug can take more than a decade to reach the shelves, and may later be withdrawn because of side effects and complications. CRISPR could bring more customized therapies to market more quickly, speeding up the traditional drug discovery process.
CRISPR allows accurate and fast identification of potential gene targets for efficient pre-clinical testing. And since it can knock out particular genes, CRISPR gives researchers a faster and more affordable way to study many genes and determine which ones are involved in a disease. It can also provide more ways to treat patients and to design more effective antibiotics.
CRISPR is also a more efficient method of gene therapy to treat single-gene disorders such as ADA deficiency, beta-thalassemia and sickle-cell anemia.
CRISPR can also be used to combat the growing problem of antibiotic resistance, in which bacterial strains become resistant to existing antibiotics, rendering infections untreatable.
Food & Agriculture
In the 2000s, when the ins and outs of CRISPR were still unclear, scientists at the yogurt company Danisco harnessed an early understanding of CRISPR to protect a key bacterium used in milk and yogurt cultures, which kept getting infected by viruses.
Now, as climate change hinders food production and agriculture, CRISPR will be needed in cultivation. For example, cacao is becoming increasingly difficult to grow as farming regions become hotter and drier. Environmental change will also bring new pathogens and microorganisms that do not exist today.
Gene editing can make farming more efficient, and curb global food shortages for crops like potatoes and tomatoes. Crops can also be made resilient and resistant to droughts and pathogens.
Another interesting area is the production of leaner livestock. In October 2017, researchers at the Chinese Academy of Sciences in Beijing used CRISPR to genetically engineer pigs with 24% less body fat.
CRISPR can be used to re-engineer microbes and create new materials. We can alter microbes to increase diversity, make more efficient biofuels and create new bio-based, environmentally friendly materials.
Limitations of CRISPR and Why It’s Being Held Back
CRISPR’s potential benefits don’t end here; the full list isn’t even defined yet. But they don’t come without limitations. Regulatory bodies are holding CRISPR back and slowing down research, because we still don’t understand the long-term consequences of editing genes and genomes.
When CRISPR is used for human gene therapies, safety will be the biggest concern. It is a brand new tool and may have a wide range of side effects that we know nothing about. The main concern is off-target activity: while, in theory, a single-gene edit reverses a mutation that causes a disease, it can also cause unintended changes elsewhere in the genome, much like the side effects of the drugs we take as medicine. One plausible consequence is abnormal tissue growth leading to cancer.
Another issue is mosaicism: CRISPR can leave a person with both edited and unedited cells - a mosaic - which can give them mixed characteristics, such as having two complexions.
Moreover, immune system complications can also arise, which means that interventions and therapies may trigger an undesired response from a patient’s immune system.
Gene editing can also produce unintended biological effects due to a lack of precision, as with the Cas9 protein, which leaves blunt ends. While this can be mitigated by using Cpf1 instead of Cas9, other limitations may still remain.
Bringing Back The Extinct
It is a fantastic idea for making real-life museums: edit the genome of an embryo of the closest living relative of an extinct animal and bring the species back to life. Such initiatives are already being pursued by different scientific groups and organizations. But should we bring back what’s already gone? We don’t know what effects this may have on the human population and other species, since ecosystems have gradually adapted to life without these organisms.
Expectant couples can be told by their doctor whether their child may have a genetic disorder, for example Down syndrome, which is relatively common. Whether the couple decides to terminate the pregnancy is their personal choice; an estimated 92 percent of women who receive a prenatal diagnosis of Down syndrome choose to have an abortion. Is there a way to save the baby from having the syndrome? Yes: gene editing. And CRISPR makes doing so much easier and cheaper. The same could be done for many genetic disorders that affect humankind.
If you’ve read or heard about the Chinese scientists who edited the genes of embryos, you will also have heard about the global outcry they received. They had wanted to make the babies resistant to HIV, smallpox and cholera. In the scientific community, however, using CRISPR or any other gene-editing tool to edit human babies is considered highly unethical, and in many places it is not legal. This is because when a genetic modification is made to a germ (reproductive) cell, the change is permanent and will be passed down the generations unless it is modified again; the original, natural state can never truly be restored.
You may want to use genetic modification to ensure that your baby is resistant to a disease it might otherwise get. Others may want to ensure that their child has a specific eye color, or height, and so on. It is from this idea that the term ‘designer babies’ originates. There is growing concern that there will be no end to what people may choose to genetically modify in their children, and to what this may mean for the future. To choose to change your baby is to decide the fate of a human being who hasn’t even been born yet.
CRISPR is a breakthrough technology that will ultimately change the world: who we are, how we live, and possibly even whether extinct species return to life. It will change our eating habits as well as the food we eat. It will help us optimize modern medicines so we can fight infections, diseases and genetic disorders more efficiently. CRISPR allows us to edit genes and work with biology in a way that was never before possible.