Charting New Frontiers
Table of Contents
- Introduction
- Chapter 1: The Genesis of Modern AI: Early Pioneers and Foundational Concepts
- Chapter 2: Deep Learning Revolution: Unveiling the Power of Neural Networks
- Chapter 3: AI in Healthcare: Transforming Diagnosis, Treatment, and Patient Care
- Chapter 4: The Ethics of AI: Navigating Bias, Transparency, and Accountability
- Chapter 5: The Future of Work: AI's Impact on Employment and the Economy
- Chapter 6: Biotech's Beginnings: From Early Discoveries to the CRISPR Revolution
- Chapter 7: Gene Editing: Rewriting the Code of Life
- Chapter 8: Personalized Medicine: Tailoring Treatments to the Individual
- Chapter 9: Synthetic Biology: Engineering New Life Forms
- Chapter 10: Biotech's Ethical Crossroads: Balancing Innovation and Responsibility
- Chapter 11: The Solar Surge: Harnessing the Power of the Sun
- Chapter 12: Wind Warriors: Capturing Energy from the Breeze
- Chapter 13: Energy Storage Solutions: Batteries, Grids, and Beyond
- Chapter 14: The Policy Landscape: Driving the Transition to Clean Energy
- Chapter 15: Clean Energy's Global Impact: A Sustainable Future for All
- Chapter 16: The New Space Race: Private Companies and the Quest for the Cosmos
- Chapter 17: Rocket Science Revolution: Reducing the Cost of Space Travel
- Chapter 18: Mission to Mars: The Challenges and Triumphs of Interplanetary Exploration
- Chapter 19: Beyond Mars: Exploring the Outer Reaches of Our Solar System
- Chapter 20: Space Exploration's Impact on Earth: Technologies and Discoveries
- Chapter 21: The Power of Convergence: Breaking Down Disciplinary Silos
- Chapter 22: Nanotechnology and Biotechnology: A Powerful Partnership
- Chapter 23: AI and Climate Change: Data-Driven Solutions for a Global Crisis
- Chapter 24: Materials Science and Clean Energy: Building a Sustainable Future
- Chapter 25: The Collaborative Imperative: Research and Development in the 21st Century
Introduction
The modern world is defined by an astonishing rate of change, propelled by unprecedented breakthroughs in science and technology. We stand at the cusp of a new era, where the boundaries of what's possible are constantly being redefined. This rapid transformation is not accidental; it is the result of the tireless work, unwavering dedication, and visionary thinking of a select group of individuals – the innovators at the helm of modern science and technology. Charting New Frontiers: The Innovators at the Helm of Modern Science and Technology delves into the lives, minds, and groundbreaking achievements of these remarkable individuals.
This book offers a journey through the most exciting and impactful frontiers of scientific and technological advancement. From the intricate world of artificial intelligence to the vast expanse of space exploration, we explore the pivotal discoveries and revolutionary technologies that are reshaping our present and shaping our future. We'll examine how these pioneers are not only pushing the limits of human knowledge but also tackling some of the most pressing global challenges, from climate change and disease to resource scarcity and inequality.
The innovators profiled in this book represent a diverse range of fields, including artificial intelligence, biotechnology, clean energy, and space exploration, as well as those who are brilliantly merging those fields. They are scientists, engineers, entrepreneurs, and visionaries who share a common trait: a relentless pursuit of knowledge and a deep commitment to making a positive impact on the world. They are individuals who have dared to dream big, challenge conventional wisdom, and overcome seemingly insurmountable obstacles to achieve their goals.
Beyond the technical details of their discoveries, we delve into the personal stories of these trailblazers. We explore their motivations, their inspirations, and the challenges they faced along the way. These narratives provide a glimpse into the human side of innovation, revealing the perseverance, resilience, and creativity required to make groundbreaking advancements. Their stories are not just tales of scientific triumph, but also testaments to the power of human ingenuity and the enduring spirit of exploration.
This book is structured to provide a comprehensive overview of the key areas of innovation driving our world forward. We will examine the pioneers of artificial intelligence, the biotech trailblazers, the clean energy innovators, and the space exploration visionaries, giving a clear picture of the key advances in each field and the major players involved. Additionally, we will highlight those who are bridging the traditional gaps between these fields to tackle complex problems in a cross-disciplinary fashion.
Charting New Frontiers is intended for anyone curious about the forces shaping the future. Whether you're a student, educator, tech enthusiast, or simply someone seeking to understand the transformative power of science and technology, this book offers valuable insights and inspiration. It is a celebration of human ingenuity and a testament to the extraordinary potential that lies within us to create a better world. It is a call to action, encouraging readers to embrace the spirit of innovation and contribute to the ongoing quest for knowledge and progress.
CHAPTER ONE: The Genesis of Modern AI: Early Pioneers and Foundational Concepts
The story of Artificial Intelligence (AI) isn't a sudden burst of 21st-century ingenuity. It's a tale woven through decades, a tapestry of brilliant minds grappling with the very definition of intelligence and striving to replicate it in machines. Before we had self-driving cars, virtual assistants, or algorithms that could beat grandmasters at chess, there were dreamers, mathematicians, and logicians laying the conceptual groundwork for what would become one of the most transformative technologies of our time. They may not always have seen the complete picture, but together they created the foundations.
The roots of AI can be traced back to antiquity, with myths and stories of artificial beings endowed with intelligence or consciousness. However, the formal pursuit of AI as a scientific discipline began in the mid-20th century. The 1940s and 50s saw a confluence of factors that made the dream of thinking machines seem, for the first time, attainable. The invention of the programmable digital computer provided the essential hardware, while breakthroughs in neurology and information theory provided tantalizing hints about the workings of the human brain.
One of the earliest and most influential figures in this nascent field was Alan Turing, a British mathematician and logician. Turing is best known for his crucial role in breaking the German Enigma code during World War II, but his contributions to the theoretical underpinnings of AI are equally profound. In his seminal 1950 paper, "Computing Machinery and Intelligence," Turing proposed what is now known as the "Turing Test" – a benchmark for determining whether a machine can exhibit intelligent behavior indistinguishable from that of a human.
The Turing Test, in its simplest form, involves a human evaluator engaging in natural language conversations with both a human and a machine, without knowing which is which. If the evaluator cannot reliably distinguish the machine from the human, the machine is said to have passed the test. This seemingly simple concept sparked decades of debate and research, forcing scientists to grapple with fundamental questions about the nature of intelligence, consciousness, and the very possibility of creating truly "thinking" machines. In effect, Turing was asking us to rethink what we mean by "thinking" in the first place.
Around the same time, Warren McCulloch, a neurophysiologist, and Walter Pitts, a logician, made another significant contribution. In 1943, they published a paper titled "A Logical Calculus of the Ideas Immanent in Nervous Activity," which proposed a mathematical model for artificial neural networks. Their model, inspired by the structure of biological neurons in the brain, demonstrated how simple interconnected units could perform logical computations. This work laid the foundation for the development of artificial neural networks, which are now a cornerstone of modern AI.
The McCulloch-Pitts neuron, as it came to be known, was a highly simplified abstraction of a real neuron. It received inputs, each with an associated weight, and produced an output based on a threshold function. While rudimentary, this model showed that networks of these artificial neurons could, in theory, perform complex calculations. This was a groundbreaking idea, suggesting that the seemingly mysterious workings of the brain could be replicated, at least in principle, using mathematical and engineering principles. Could the mind be boiled down to maths?
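To make the idea concrete, here is a minimal sketch in Python of a McCulloch-Pitts-style threshold unit. The weights, thresholds, and logic-gate examples are illustrative choices rather than details taken from the original 1943 paper, but they show how a weighted sum and a threshold can already perform logical computation.

```python
def mcculloch_pitts_neuron(inputs, weights, threshold):
    """Fire (output 1) if the weighted sum of the inputs reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With suitable weights and thresholds, a single unit computes a basic logic gate.
AND = lambda a, b: mcculloch_pitts_neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mcculloch_pitts_neuron([a, b], [1, 1], threshold=1)
NOT = lambda a:    mcculloch_pitts_neuron([a],    [-1],   threshold=0)

print(AND(1, 1), OR(0, 1), NOT(1))  # -> 1 1 0
```

Networks built from units like these can, in principle, compute anything a digital computer can, which is precisely what made the 1943 result so striking.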
The 1956 Dartmouth Workshop is widely considered the birthplace of AI as a distinct field of research. Organized by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon, the workshop brought together a small group of researchers who shared a common goal: to explore the possibility of creating machines that could "use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves." The term "Artificial Intelligence" itself was coined by McCarthy for this workshop.
The Dartmouth Workshop, while not producing any immediate breakthroughs, was pivotal in setting the agenda for AI research for the next several decades. The participants were optimistic, even boldly so, about the prospects of achieving human-level AI within a generation. This initial optimism, fueled by early successes in areas like problem-solving and game-playing, would later give way to periods of disillusionment and reduced funding, often referred to as "AI winters." Still, it was a time of unfettered experimentation.
One of the early successes that fueled this initial optimism was the development of programs that could play games like checkers and chess. Arthur Samuel, at IBM, created a checkers-playing program in the late 1950s that could learn from its own mistakes and improve its performance over time. This was a significant demonstration of machine learning, a key component of AI. Samuel's program combined minimax-style search over possible moves with an evaluation function that it refined through experience, choosing the move that maximized its chances of winning.
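The core idea of minimax is easy to sketch. The Python fragment below is a simplified illustration, not Samuel's actual program; the `game` object and `evaluate` function are stand-ins for whatever rules and scoring a particular game would supply.

```python
def minimax(state, depth, maximizing, game, evaluate):
    """Score `state` by looking `depth` moves ahead, assuming both sides play optimally.

    `game` supplies the rules (legal_moves, apply, is_terminal);
    `evaluate` scores a position from the maximizing player's point of view.
    """
    if depth == 0 or game.is_terminal(state):
        return evaluate(state)

    child_scores = (
        minimax(game.apply(state, move), depth - 1, not maximizing, game, evaluate)
        for move in game.legal_moves(state)
    )
    # The player to move picks the best outcome for themselves; the opponent picks the worst.
    return max(child_scores) if maximizing else min(child_scores)
```

In Samuel's case, the evaluation function itself was adjusted as the program played more games, which is what made it an early example of machine learning rather than pure search.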
Another notable early AI program was the General Problem Solver (GPS), developed by Allen Newell and Herbert A. Simon at Carnegie Mellon University. GPS was designed to solve a wide range of problems, from proving theorems in logic to solving puzzles like the Tower of Hanoi. It used a technique called "means-ends analysis," which involved breaking down a problem into smaller subgoals and then finding a sequence of actions to achieve those subgoals. Newell and Simon hoped that such general-purpose methods would reveal the underlying mechanics of human thought.
These early AI programs, while impressive for their time, were limited in their scope and capabilities. They were often brittle, meaning they could only solve problems within a narrow domain and struggled to adapt to new or unexpected situations. They also lacked the common sense knowledge and reasoning abilities that humans take for granted. These limitations led to a period of reassessment and a shift in focus towards more specialized areas of AI research, such as expert systems, natural language processing, and computer vision.
Expert systems, developed in the 1970s and 80s, aimed to capture the knowledge and expertise of human experts in specific domains, such as medical diagnosis or financial analysis. These systems used a knowledge base of facts and rules, along with an inference engine, to reason about problems and provide recommendations. While expert systems found some commercial success, they were also criticized for their inability to learn from experience and their difficulty in handling uncertainty. Creating and maintaining the knowledge base also proved very labour-intensive.
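The knowledge-base-plus-inference-engine pattern can be illustrated with a toy example. The facts and rules below are invented for this sketch and are far simpler than anything in a real expert system, but the forward-chaining loop captures the basic mechanism: keep applying rules until no new conclusions can be drawn.

```python
# Each rule maps a set of required facts to a conclusion (all facts are illustrative).
rules = [
    ({"fever", "cough"}, "possible_flu"),
    ({"possible_flu", "shortness_of_breath"}, "recommend_chest_xray"),
]

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new conclusions can be drawn."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "cough", "shortness_of_breath"}, rules))
# -> includes 'possible_flu' and 'recommend_chest_xray'
```

Real systems held thousands of such rules, and every one of them had to be elicited from a human expert and written down by hand, which is why the knowledge-engineering bottleneck became so painful.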
Natural language processing (NLP), another major focus of early AI research, sought to enable computers to understand and generate human language. Early NLP systems used rule-based approaches, relying on handcrafted grammars and dictionaries. These systems had limited success in handling the complexities and ambiguities of natural language. Progress in NLP accelerated significantly with the advent of statistical methods and machine learning techniques, leading to more robust and adaptable systems. The quest for more data began.
Computer vision, the field of AI that deals with enabling computers to "see" and interpret images, also has its roots in the early days of AI research. Early work in computer vision focused on tasks like object recognition and scene understanding. These early systems were often based on handcrafted features and rules, and their performance was limited by the computational resources available at the time. The development of more powerful computers and new algorithms, such as convolutional neural networks, has since revolutionized computer vision.
Despite the challenges and setbacks, the early pioneers of AI laid a crucial foundation for the field's subsequent development. Their work on artificial neural networks, symbolic reasoning, problem-solving, and game-playing established many of the core concepts and techniques that are still relevant today. They also sparked important philosophical debates about the nature of intelligence and the potential for creating artificial minds. These were only the first few chapters of the story.
The initial optimism surrounding AI, while tempered by the realities of the challenges involved, never completely disappeared. The dream of creating machines that could think, learn, and solve problems like humans continued to inspire researchers, and the seeds planted by the early pioneers would eventually blossom into the vibrant and rapidly evolving field of AI that we see today. They never gave up on their dream, and ultimately they were vindicated.
CHAPTER TWO: Deep Learning Revolution: Unveiling the Power of Neural Networks
The "AI winters" of the late 20th century were, in part, a consequence of the limitations of the early approaches to artificial intelligence. Expert systems, while useful in specific domains, proved brittle and difficult to scale. Rule-based systems struggled to handle the complexity and ambiguity of real-world problems. The initial excitement surrounding artificial neural networks had waned as researchers encountered the difficulties of training large networks with the computational resources available at the time. Something had to change.
However, a small group of researchers persevered, believing that neural networks held the key to unlocking true artificial intelligence. They continued to refine the algorithms and explore new network architectures, laying the groundwork for a resurgence of interest in neural networks that would eventually lead to the deep learning revolution. These researchers often had to battle against the prevailing mood; funding was hard to come by, and some reportedly resorted to personal credit cards to purchase equipment.
One of the key breakthroughs that paved the way for deep learning was the development of more effective training algorithms. Backpropagation, an algorithm for adjusting the weights in a neural network based on the error in its output, had been proposed in the 1970s, but it was not widely adopted until the 1980s. The work of researchers like Geoffrey Hinton, David Rumelhart, and Ronald Williams in popularizing backpropagation was crucial in making it a practical tool for training neural networks.
Backpropagation works by calculating the gradient of the error function with respect to the weights in the network. This gradient indicates the direction in which the weights should be adjusted to reduce the error. By iteratively adjusting the weights in this way, the network can learn to map inputs to outputs more accurately. The crucial insight was that the error signal could flow backwards through the layers, as if the network were retracing its steps, and this is what made training multi-layer networks practical.
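As a rough illustration of the idea, here is a minimal sketch of gradient descent on a single sigmoid neuron. The data, targets, and learning rate are made up for the example; backpropagation is essentially this same chain-rule calculation repeated layer by layer through a deep network.

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One-neuron "network": output = sigmoid(w*x + b), trained with squared error.
w, b = random.uniform(-1, 1), 0.0
data = [(0.0, 0.2), (1.0, 0.8)]   # illustrative (input, target) pairs
lr = 0.5

for epoch in range(5000):
    for x, target in data:
        y = sigmoid(w * x + b)        # forward pass
        d_loss = y - target           # derivative of 0.5*(y - target)**2 w.r.t. y
        d_z = d_loss * y * (1 - y)    # chain rule back through the sigmoid
        w -= lr * d_z * x             # adjust each weight against its gradient
        b -= lr * d_z

print(round(sigmoid(b), 2), round(sigmoid(w + b), 2))  # close to 0.2 and 0.8
```

In a deep network the same error signal is passed back through every layer in turn, with each weight receiving its own share of the blame for the mistake.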
Another important development was the increasing availability of large datasets. Neural networks, particularly deep neural networks with many layers, require vast amounts of data to train effectively. The rise of the internet and the digitization of information provided a wealth of data that could be used to train these networks. This data, combined with more powerful computers, allowed researchers to build and train much larger and more complex networks than had previously been possible.
The combination of improved training algorithms, larger datasets, and more powerful hardware led to a series of breakthroughs in the 2000s. In 2006, Geoffrey Hinton and his colleagues at the University of Toronto showed that deep neural networks could be trained effectively, and in the years that followed such networks went on to achieve state-of-the-art results on a variety of tasks, including speech recognition and image recognition. This marked a turning point in the field of AI, sparking a renewed wave of interest in neural networks and deep learning.
One of the key insights that emerged from this work was the importance of unsupervised pre-training. Hinton and his colleagues found that they could train deep neural networks more effectively by first training them on a large dataset of unlabeled data, using an unsupervised learning algorithm. This pre-training step helped the network learn useful features from the data, which could then be fine-tuned using a smaller dataset of labeled data. This approach was particularly effective for tasks where labeled data was scarce.
This period also saw the development of new network architectures that were better suited for specific types of data and tasks. Convolutional Neural Networks (CNNs), inspired by the structure of the visual cortex in the brain, proved particularly effective for image recognition. CNNs use convolutional layers to extract features from images, such as edges and corners, and pooling layers to reduce the dimensionality of the data. These networks are designed to be translation-invariant, meaning they can recognize objects regardless of their position in the image.
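The heart of a convolutional layer is a small filter that slides over the image and computes weighted sums. The sketch below, with a made-up image and a hand-picked edge-detecting kernel, shows the operation in miniature; real CNNs learn many such filters automatically and stack them with nonlinearities and pooling layers.

```python
def convolve2d(image, kernel):
    """Slide `kernel` over `image` (lists of lists) and return the feature map."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [
        [
            sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh)
                for dj in range(kw)
            )
            for j in range(out_w)
        ]
        for i in range(out_h)
    ]

# A simple vertical-edge detector: responds where pixel values change from left to right.
edge_kernel = [[-1, 1],
               [-1, 1]]
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
print(convolve2d(image, edge_kernel))  # strongest response at the 0 -> 1 boundary
```

Because the same filter is applied at every position, the layer responds to a feature wherever it appears, which is the source of the translation tolerance described above.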
Recurrent Neural Networks (RNNs), on the other hand, are designed to process sequential data, such as text or speech. RNNs have feedback connections that allow them to maintain a "memory" of previous inputs, making them well-suited for tasks that require understanding context. A variation of RNNs, called Long Short-Term Memory (LSTM) networks, was developed to address the problem of vanishing gradients, which made it difficult to train RNNs on long sequences.
The breakthrough that truly catapulted deep learning into the public consciousness was the 2012 ImageNet Large Scale Visual Recognition Challenge. A team from the University of Toronto, led by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton, used a deep convolutional neural network, called AlexNet, to achieve a dramatic improvement in image recognition accuracy compared to previous methods. AlexNet's success was a watershed moment for deep learning, demonstrating that it could decisively outperform the best established approaches on complex tasks.
AlexNet was significantly larger and deeper than previous convolutional neural networks, with 60 million parameters and eight layers. It was trained on a dataset of 1.2 million images, using two powerful graphics processing units (GPUs). The use of GPUs, which are designed for parallel processing, was crucial in enabling the training of such a large network. It was a huge undertaking. The results spoke for themselves. The world took note.
The success of AlexNet sparked a surge of interest in deep learning, both in academia and industry. Companies like Google, Facebook, and Microsoft began investing heavily in deep learning research and development, and deep learning techniques were rapidly applied to a wide range of problems, from natural language processing and machine translation to robotics and self-driving cars. The term "deep learning" became common parlance. The AI winter was finally over.
The following years saw a continuous stream of advancements in deep learning, with new network architectures, training algorithms, and applications emerging at a rapid pace. Generative Adversarial Networks (GANs), introduced by Ian Goodfellow and his colleagues in 2014, are a particularly interesting development. GANs consist of two networks, a generator and a discriminator, that are trained simultaneously in a competitive game. The generator tries to create realistic images, while the discriminator tries to distinguish between real and generated images. This adversarial training process leads to the generation of increasingly realistic images.
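The adversarial training loop is easier to grasp in code. Below is a minimal sketch using PyTorch on a toy one-dimensional problem, where the generator learns to mimic samples from a Gaussian distribution; the network sizes, learning rates, and step count are arbitrary choices for illustration, and real image-generating GANs are vastly larger.

```python
import torch
import torch.nn as nn

# Toy GAN: the generator learns to mimic samples from a 1-D Gaussian (mean 4, std 1.25).
real_data = lambda n: torch.randn(n, 1) * 1.25 + 4.0
noise = lambda n: torch.randn(n, 1)

G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

bce = nn.BCELoss()
opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(3000):
    # Train the discriminator: real samples are labeled 1, generated samples 0.
    real, fake = real_data(64), G(noise(64)).detach()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # Train the generator: try to make the discriminator label its output as real.
    g_loss = bce(D(G(noise(64))), torch.ones(64, 1))
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()

print(G(noise(1000)).mean().item())  # should drift toward the real mean of 4.0
```

The two optimisers pull in opposite directions: the discriminator's loss falls when it tells real from fake, while the generator's loss falls when it fools the discriminator, and it is this tug-of-war that gradually sharpens the generated samples.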
GANs have been used to create stunningly realistic images of faces, objects, and scenes. They have also been applied to tasks like image editing, style transfer, and even drug discovery. The potential applications of GANs are vast, and they are a subject of ongoing research and development. It is like teaching a computer to imagine. The results are often surreal.
Deep reinforcement learning is another area of deep learning that has seen significant progress. Reinforcement learning is a type of machine learning in which an agent learns to make decisions by interacting with an environment and receiving rewards or penalties. Deep reinforcement learning combines reinforcement learning with deep neural networks, allowing agents to learn complex behaviors from raw sensory input. For years, that combination had been notoriously difficult to get working reliably.
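The reward-driven learning loop at the heart of reinforcement learning can be shown with classic tabular Q-learning on a toy corridor world; the environment, rewards, and parameters below are invented for illustration. Deep reinforcement learning replaces the small table of values with a neural network so that the same idea can scale to raw sensory input.

```python
import random

# Toy corridor: states 0..4, start at state 0, reward 1 for reaching state 4.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1      # learning rate, discount, exploration rate
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]; action 0 = left, 1 = right

def choose_action(state):
    """Epsilon-greedy: usually pick the best-looking action, sometimes explore."""
    if random.random() < EPSILON:
        return random.randrange(2)
    best = max(Q[state])
    return random.choice([a for a in (0, 1) if Q[state][a] == best])

for episode in range(500):
    state = 0
    while state != GOAL:
        action = choose_action(state)
        next_state = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
        reward = 1.0 if next_state == GOAL else 0.0
        # Q-learning update: nudge the estimate toward reward plus discounted future value.
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][action])
        state = next_state

print([round(max(q), 2) for q in Q[:GOAL]])  # values grow as states get closer to the goal
```

The agent is never told which action is correct; it simply discovers, through trial, error, and delayed reward, that moving right pays off.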
Deep reinforcement learning achieved a major breakthrough in 2016 when AlphaGo, a program developed by Google DeepMind, defeated a world champion Go player. Go, an ancient Chinese board game, is considered to be far more complex than chess, with a vast number of possible moves. AlphaGo's victory was a landmark achievement, demonstrating the power of deep reinforcement learning to master complex tasks that require strategic thinking.
The advancements in deep learning have been driven not only by algorithmic innovations but also by the development of specialized hardware. GPUs, originally designed for graphics rendering, have proven to be ideally suited for the parallel processing required by deep neural networks. More recently, companies like Google have developed custom hardware, such as Tensor Processing Units (TPUs), specifically designed for deep learning workloads.
The deep learning revolution has transformed the field of artificial intelligence, leading to breakthroughs in a wide range of applications. From image recognition and natural language processing to robotics and game playing, deep learning has demonstrated its ability to surpass human performance on many complex tasks. The progress has been, if anything, faster than expected. The capabilities have often surprised the developers.
However, deep learning is not without its limitations and challenges. Deep neural networks are often criticized for being "black boxes," meaning their internal workings are opaque and difficult to interpret. This lack of transparency makes it challenging to understand why a network makes a particular decision, which can be a concern in applications where accountability and trust are important. Attempts are being made to open this box.
Another challenge is the need for vast amounts of labeled data to train deep neural networks. While unsupervised learning techniques have made progress, many applications still require large datasets of labeled examples, which can be expensive and time-consuming to acquire. This is a bottleneck for many projects. The search for more data continues.
Furthermore, deep neural networks can be vulnerable to adversarial attacks, in which small, carefully crafted perturbations to the input can cause the network to make incorrect predictions. These vulnerabilities raise concerns about the security and reliability of deep learning systems, particularly in safety-critical applications. Research is ongoing. The risks are being carefully considered.
Despite these challenges, the deep learning revolution continues to advance at a breathtaking pace. New architectures, training algorithms, and applications are constantly being developed, pushing the boundaries of what's possible with AI. The field is dynamic and ever-evolving, driven by the collaborative efforts of researchers, engineers, and entrepreneurs around the world. They are building on the past. They are pushing into the future. The impact of this revolution will only deepen.
CHAPTER THREE: AI in Healthcare: Transforming Diagnosis, Treatment, and Patient Care
Artificial intelligence is rapidly transforming healthcare, moving from the realm of science fiction to a tangible reality that is improving diagnoses, personalizing treatments, and enhancing patient care. The convergence of powerful algorithms, vast datasets of medical information, and increasing computational power is creating unprecedented opportunities to address some of the most pressing challenges in medicine. It's not about replacing doctors; it's about augmenting their capabilities and empowering them with tools that can help them make better decisions, faster. It is like giving every doctor a superpowered assistant.
One of the most promising areas of AI application in healthcare is medical imaging analysis. Deep learning algorithms, particularly convolutional neural networks (CNNs), have demonstrated remarkable accuracy in detecting and diagnosing diseases from medical images such as X-rays, CT scans, and MRIs. These algorithms can identify subtle patterns and anomalies that might be missed by the human eye, leading to earlier and more accurate diagnoses. This can be the difference between life and death. Spotting a tiny tumor early can make treatment much easier.
For example, AI-powered systems are being used to detect early signs of breast cancer in mammograms, lung cancer in CT scans, and diabetic retinopathy in retinal images. These systems can analyze images much faster than human radiologists, reducing the workload and allowing doctors to focus on more complex cases. They can also provide a "second opinion," helping to reduce diagnostic errors and improve patient outcomes. The aim is to free up doctors' time.
Beyond image analysis, AI is also being used to develop new diagnostic tools based on other types of medical data. Natural language processing (NLP) algorithms can analyze electronic health records (EHRs), extracting relevant information about a patient's medical history, symptoms, and lab results. This information can be used to identify patients at risk of developing certain conditions, such as heart disease or diabetes, or to predict the likelihood of a patient experiencing complications after surgery.
NLP is also being used to develop virtual assistants that can interact with patients, answering their questions, providing them with information about their conditions, and reminding them to take their medications. These virtual assistants can help to improve patient engagement and adherence to treatment plans. They can also free up healthcare professionals' time, allowing them to focus on more complex patient interactions. They can be a valuable tool for managing chronic conditions.
Another major area of AI application in healthcare is personalized medicine. Every patient is unique, with their own genetic makeup, lifestyle, and environmental factors that influence their health. AI algorithms can analyze vast amounts of patient data to identify patterns and predict how individual patients will respond to different treatments. This allows doctors to tailor treatments to the specific needs of each patient, maximizing effectiveness and minimizing side effects. One size does not fit all.
AI is being used to develop personalized treatment plans for a variety of conditions, including cancer, heart disease, and mental health disorders. For example, in cancer treatment, AI algorithms can analyze a patient's tumor genome to identify genetic mutations that are driving the cancer's growth. This information can be used to select targeted therapies that are most likely to be effective against that particular type of cancer. This is a much more precise approach.
AI is also playing a crucial role in drug discovery and development. The process of developing new drugs is traditionally long, expensive, and often unsuccessful. AI algorithms can accelerate this process by analyzing vast amounts of biological data to identify promising drug candidates, predict their efficacy and safety, and optimize their design. This can significantly reduce the time and cost of bringing new drugs to market. It could revolutionize the pharmaceutical industry.
For example, AI is being used to screen libraries of millions of molecules to identify compounds that are likely to bind to a specific target protein involved in a disease. It can also be used to predict how a drug will be absorbed, distributed, metabolized, and excreted by the body, helping to identify potential side effects before a drug enters clinical trials. This helps to filter out the duds. It saves time and money.
AI is also being used to design new drugs from scratch. Generative adversarial networks (GANs), a type of deep learning algorithm, can be trained to generate new molecules with desired properties, such as high binding affinity to a target protein and low toxicity. This opens up exciting possibilities for creating entirely new classes of drugs that could treat diseases that are currently untreatable. It is like teaching a computer to be a chemist.
Beyond diagnosis and treatment, AI is also being used to improve the efficiency and effectiveness of healthcare operations. AI-powered systems can automate administrative tasks, such as scheduling appointments, processing insurance claims, and managing patient records. This can free up healthcare staff to focus on patient care and reduce administrative overhead. They take care of the boring stuff.
AI can also be used to optimize hospital workflows, such as predicting patient flow, managing bed capacity, and allocating resources. This can help to reduce wait times, improve patient satisfaction, and ensure that resources are used efficiently. Smart hospitals are becoming a reality.
The use of AI in robotic surgery is another exciting development. Robotic surgery systems, controlled by surgeons, offer enhanced precision, flexibility, and control during surgical procedures. AI can augment these systems by providing real-time guidance, assisting with complex maneuvers, and even automating certain aspects of the surgery. This can lead to less invasive procedures, faster recovery times, and improved surgical outcomes. The surgeon remains in charge, with AI as a powerful tool.
However, the implementation of AI in healthcare also raises important ethical and practical considerations. One of the key concerns is data privacy and security. Patient data is highly sensitive, and it is crucial to ensure that it is protected from unauthorized access and misuse. Strong data governance frameworks and robust security measures are essential. The stakes are very high.
Another concern is the potential for bias in AI algorithms. If AI systems are trained on biased data, they may perpetuate and even amplify existing health disparities. It is important to ensure that AI systems are trained on diverse and representative datasets and that they are regularly evaluated for bias. Fairness and equity must be priorities.
The "black box" nature of some AI algorithms, particularly deep neural networks, also raises concerns about transparency and accountability. It can be difficult to understand why an AI system makes a particular decision, which can be a challenge in healthcare, where decisions can have life-or-death consequences. Explainable AI (XAI) is an emerging field that aims to develop AI systems that are more transparent and interpretable.
The integration of AI into healthcare also requires careful consideration of the human element. It is important to ensure that healthcare professionals are properly trained to use AI tools and that they understand their limitations. AI should be seen as a tool to augment human capabilities, not to replace them. The human touch remains essential in healthcare.
Despite these challenges, the potential benefits of AI in healthcare are immense. AI has the power to transform healthcare, making it more accurate, personalized, efficient, and accessible. As AI technology continues to advance, it is likely to play an increasingly important role in improving the health and well-being of people around the world. It is a rapidly evolving field, with new breakthroughs happening all the time. It is not a panacea, but it is a powerful tool that can make a real difference. The future of healthcare is intertwined with the future of AI. The possibilities are both exciting and daunting.