Minimalism in Programming: How Complexity Harms Your Productivity


This article is based on a book chapter from my upcoming book “From One to Zero: A Minimalistic Approach to Programming”.

My programming students often write in with their struggles and failures. Many students ultimately overcome their struggles—but a large percentage of them give up their programming ambitions after realizing how hard creating software can be. These students started with the goal of becoming professional coders, but, ultimately, they missed that target.

After thousands of personal conversations with these students, it became apparent that many new coders don't fail because they don't know this or that Python feature, or because they lack technical skills, intelligence, or even talent.

These are not the underlying reasons why they fail.

Instead, they fail because they are overwhelmed by the complexity lurking everywhere in programming. The complexity causes them to throw in the towel. This is unfortunate because there are many ways to mitigate the harmful effects of complexity in programming. In the previous chapter, you’ve already learned some strategies about the 80/20 principle (Focus on the vital few and sacrifice the trivial many!).

In this chapter, we're going to take a comprehensive look at this important and highly underexplored topic. What exactly is complexity? Where does it occur? What does it look like?

Let's start with a quick overview—there's significant complexity in selecting the right:

programming language among dozens of popular languages,

coding project to work on—from thousands of open-source projects and myriads of problems,

libraries within a language (scikit-learn vs. NumPy vs. TensorFlow),

emerging technologies to "bet on"—Alexa apps, smartphone apps, browser-based web apps, integrated Facebook or WeChat apps, virtual reality apps—and

coding editor, such as PyCharm, IDLE, or Atom.

Given the great confusion caused by these sources of complexity, it’s no surprise that “How to start?” is one of the most common questions from programming beginners.

To answer the question right away: the best way to start is not to pick up a programming book and read through every syntactical feature of the language. Surprisingly, these coding books sell well—I sell such books myself. However, after interacting personally with thousands of programming students, I've realized that many ambitious learners buy programming books as a commitment device to put the learning task on their to-do lists—having spent money on the book, they'd better read it or the investment is lost. But like so many other tasks on their to-do lists, reading a programming book is seldom completed.

Many students buy these programming tutorial books but very few actually read them.

So, what is the best way to start to learn to program? In my opinion, the best way to start is to choose a practical code project—a simple one if you’re a beginner—and push it to completion.

Don’t read coding books before you do this.

Don’t read random tutorials on the web.

Don’t scroll through endless feeds on StackOverflow.

Just set up the project and start coding with the limited skills you have and your common sense.

It's okay if you don't understand everything you're doing; you'll gradually increase your understanding. Read books and articles only to make progress on the project in front of you. By diving into the process of finishing your first project, you'll need to solve a number of highly relevant problems.

By answering the questions that come up along the way, you gradually build the well-rounded skill set of a practitioner. Over time, you'll answer these questions better and better. Your speed and skill at solving these problems will grow. You'll be able to tackle much bigger problems of the same kind, and you'll build an internal database of programming patterns and conceptual insights. Even advanced coders learn and improve through the exact same process—only their coding projects have become much larger and more complicated.

Let’s assume you adopt this project-based learning approach. You focus on a single project and work on it for a considerable amount of time. What is your biggest enemy now? You guessed it: complexity.

You’ll struggle with complexity in:

finding bugs in ever-growing codebases,

understanding code components and how they interact,

choosing the right feature to be implemented next,

understanding the mathematical and conceptual basics of the code.

Complexity is everywhere, at every stage of a project that comes to life. And the hidden costs of this complexity are very tangible: coders who are just starting out throw in the towel, and their projects never see the light of day. The beginner concludes, "Coding is too difficult for me," and truly believes it—even though nothing could be further from the truth.

The root of the problem is overwhelming complexity and a lack of focus. So, the question arises:

How do you overcome complexity and a lack of focus?

The answer is straightforward, and I've already stressed it a few times in this book: minimalism. Seek simplicity and focus – in every stage of the coding cycle. If there's one concept I want you to take away from this book, it's this: adopt a radically minimalistic position in every area you encounter in the programming space. If this book can convince you to take more extreme measures to increase your focus, it has accomplished its mission!

Let’s dive deeper into the concept of complexity to develop an understanding of one of the great enemies of your coding productivity.

What is Complexity?

In different fields, the term complexity comes with different meanings. Sometimes it's strictly defined, as in the computational complexity of a computer program, which provides a means to analyze how a given code function behaves for varying inputs. Other times it's loosely defined as the amount or structure of interactions between system components. In this book, we're going to use it in a more generic way.

The Merriam-Webster dictionary defines complexity as "something complex". The term complex is defined as "a whole made up of complicated […] parts". If you then resolve the term complicated—"difficult to analyze, understand, or explain"—you end up with the following rough definition:

Complexity: “a whole, made up of parts, that is difficult to analyze, understand, or explain”.

This is how we use the term complexity in this book. Complexity describes a whole system or entity. It is difficult to explain or describe. Because of its difficulty, complexity causes struggle and confusion. When confronted with complexity, people find themselves cognitively unable to comprehend the deeper meaning, implications, or effects of “the whole”.

They cannot see the big picture—complexity is the enemy of clarity, closure, and predictability, because a complex system behaves in highly unpredictable ways. Where do you find complexity? You'll find it everywhere, because real-world systems are messy: a highly interrelated web of causes and effects that obfuscates the behavior of the real system, and that is impossible to decode for individuals who are themselves caught in this complex web. As in a system of coupled differential equations, the output of one system feeds into another system's input, which, in turn, feeds back into the first system as an input. Examples of highly complex systems are the stock market, social trends, emerging political viewpoints, and big computer programs with hundreds of thousands of lines of code—such as the Windows operating system.

If you are a coder, you are especially prone to overwhelming complexity. Let’s dive into different sources of complexity in the field of programming:

Complexity in a Project Lifecycle

Complexity in Software and Algorithmic Theory

Complexity in Learning

Complexity in Social Networks

Complexity in Processes

Complexity in Your Daily Life

Complexity in a Project Lifecycle

The best way to learn and create lasting value is to participate in or initiate a real-world project. But what does it look like when a real-world project comes to life? Let's dive into the different stages of the project lifecycle: Planning, Defining, Designing, Building, Testing, and Deployment (see Figure 1).

Figure 1: A software project comes to life – the project lifecycle consists of six conceptual phases: Planning, Defining, Designing, Building, Testing, Deployment.

Figure 1 shows the software development life cycle consisting of six phases. Even if you’re working on a very small software project, you’re likely going through all six phases of the software development lifecycle. Next, you’ll quickly dive into all six phases—and how complexity has a significant impact on every one of them.

Planning

The first stage of the software development life cycle is the planning phase. From the software engineering literature, you may know this as requirements analysis. The purpose of this phase is to determine what the end product will look like. A successful planning phase leads to a strictly defined set of required features to deliver to the customer or the end user.

The planning phase solves a multi-dimensional problem where different departments and functions must collaborate to determine the optimal set of features of the software. A number of factors must be taken into consideration: the costs of building a feature, the risk of not being able to successfully implement the feature, the expected value for the end user, marketing and sales implications, maintainability, scalability, legal restrictions and many more.

This phase is crucial because it can save you from massive waste of energy in the downstream phases. Business owners know that capital allocation (or, generally speaking, resource allocation) is the single most important function of a CEO. The planning phase is where leverage plays out to its fullest extent: planning mistakes can lead to millions of dollars' worth of wasted resources. On the other hand, careful planning has the power to set the business up for great success in the years that follow. The planning phase is a great leverage point where you can apply your newly acquired skill of 80/20 thinking.

However, the planning phase is also very difficult to do right.

Why? Because of our major enemy that is lurking everywhere: complexity. It is complicated to assess risk in advance properly. It’s equally complicated to figure out the strategic direction of a company or an organization. It’s complicated to guess the customers’ responses to a software project. It’s complicated to weigh the positive impact of different feature candidates—the features that are considered for inclusion. And it’s complicated to determine the legal implications of a given software feature. Taken together, the sheer complexity of solving this multi-dimensional problem is killing us.

Defining

Compared to the previous phase, this phase is relatively simple. The defining phase consists of translating the results of the previous phase (requirements) into properly specified software requirements. In other words, it formalizes the output of the previous phase to gain approval or feedback from clients and end users who will later use the product.

Designing

The goal of the designing phase is to draft the architecture of the system, decide on the modules and components that deliver the defined functionality, and design the user interface – keeping in mind the requirements developed in the previous two phases. The gold standard of the designing phase is to create a crystal-clear picture of what the final software product will look like and how it will be built.

But the devil is in the details! A great system designer must know the pros and cons of various software tools for building the system in the most efficient manner. For example, some libraries may be very easy for the programmer to use but slow in execution speed. Building your own libraries is harder for the programmers but may result in much higher speed and, consequently, improved usability of the final software product. The designing phase must fix these variables so that the benefit/cost ratio is maximized—for the specific definitions of costs and benefits in your organization.

Roughly speaking, you want maximum bang for your buck.

Building

This is where many coders want to spend all their time. The building phase is where the transformation from architectural draft to software product happens. Here, your ideas transform into tangible results—it feels satisfying to see your brainchild come to life.

Through proper preparation in the previous phases, a lot of complexity has already been eliminated. For example, the builders know which features to implement out of all the possible features. They know what the features should look like and which tools to use to implement them.

Yet the building phase is always full of new and emerging problems. Unexpected things happen that slow down progress, such as bugs in external libraries, performance issues, corrupted data, human mistakes, and many more. Building a software product is a highly complicated endeavor. To write great software, you must use an artificial language and properly explain to dumb machines what to do in every possible circumstance. A small spelling mistake (a bug) can decide the correctness and viability of the whole software product.

Testing

Congratulations, you’ve implemented all requested features and the program seems to work.

Are you done? Not quite!

You still must test the behavior of your software product for different user inputs and usage patterns. It seems like a minor detail, but this phase is often the most important of all!

In fact, it's so important that many practitioners now advocate test-driven development, where you don't even start implementing (in the previous phase) before you've written all the tests. While you can argue against that point of view – I haven't seen test-driven development deployed rigorously in practice – it's generally a good idea to spend lots of time thinking of different ways to test your product: creating test cases and checking whether the software delivers the correct result for each of them.

For example, if you implement a self-driving car, you must write so-called unit tests to check whether each little function (a unit) in your code generates the desired output for a given input. This will usually uncover some faulty functions that behave strangely under certain (extreme) inputs. But even if all your unit tests pass, you haven't yet completed the testing phase. You must test the correct interaction of the units as they build a greater whole. You must design real-world tests and drive the car for thousands or even tens of thousands of miles to uncover strange behavior patterns in strange and unpredictable situations.

What if your car drives on a small road without road signs?

What if the car in front of you abruptly stops?

What if multiple cars wait for each other on a crossroad?

What if the driver suddenly steers into approaching traffic?

There are so many tests to consider – the complexity is so high that many people throw in the towel here. What looked good in theory, even after your first implementation, often fails in practice after applying different levels of software testing such as unit tests or real-world usage tests.
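To make the unit tests mentioned above a bit more concrete, here is a minimal sketch using Python's built-in unittest module. The braking_distance function is a hypothetical stand-in for one small unit of such a system; each test checks that the unit produces the desired output for a given input, including an extreme one.

```python
import unittest

def braking_distance(speed_mps, deceleration_mps2=8.0):
    """Distance (in meters) needed to brake from speed_mps to a full stop."""
    if speed_mps < 0 or deceleration_mps2 <= 0:
        raise ValueError("speed must be non-negative and deceleration positive")
    return speed_mps ** 2 / (2 * deceleration_mps2)

class BrakingDistanceTest(unittest.TestCase):
    def test_typical_city_speed(self):
        # 50 km/h is roughly 13.9 m/s and should stop in about 12 meters.
        self.assertAlmostEqual(braking_distance(13.9), 12.08, places=1)

    def test_standing_car_needs_no_distance(self):
        self.assertEqual(braking_distance(0), 0)

    def test_negative_speed_is_rejected(self):
        # An extreme (invalid) input should be caught explicitly.
        with self.assertRaises(ValueError):
            braking_distance(-5)

if __name__ == "__main__":
    unittest.main()
```

Passing tests like these say nothing about how the units interact; that is exactly what the real-world tests described above are for.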

Deployment

Your software has passed the rigorous testing phase. Now, it’s time to release it and throw it into the marketplace.

This phase requires you to launch the product, create marketing campaigns, talk to early users of the product, fix new bugs that will surely come to light once the software is exposed to users, orchestrate the deployment of the software on different operating systems, support and troubleshoot different kinds of problems, and maintain the code base so it can adapt and improve over time. Needless to say, this phase can become quite messy, given the complexity and interdependency of the various design choices in your product (such as the software libraries used, the computing power required, and the usage patterns assumed).

Are you already discouraged? Good – now you know the enemy. But stay with us because there’s a solution in this book!

Complexity in Software and Algorithmic Theory

While there’s significant complexity in the whole process around software development, there’s as much complexity within a given piece of software. Complexity in software—how is it defined? You may already have an intuition regarding complexity in software products (“Wow – how did they implement this face detection feature! It must have been really complicated!”).

But there are many metrics in software engineering that measure the complexity of software in a more formal way.

For instance, there’s the precisely defined term of algorithmic complexity. This has nothing to do with intuition about how easily you can understand a given code piece. Instead, algorithmic complexity talks about the resource requirements of different algorithms. It allows you to compare different algorithms that solve the same problem. For example, say you’ve implemented a game app with a high-score rating system. You want the players with the highest score to appear at the top of the list, and the players with the lowest score at the bottom.

In other words, you need to sort the list. Sorting a list is more complicated for 1,000,000 players than it is for 100. There are thousands of algorithms to sort a list. Some algorithms scale better with increasing size of the input list; others scale worse. As long as your app serves a few hundred users, it doesn't really matter which algorithm you choose. But as your user base grows, the runtime of sorting the list grows super-linearly. Soon, your users will have to wait longer and longer for the list to be sorted. They start complaining—you need better algorithms!
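As a minimal sketch of this effect, the following snippet sorts a made-up high-score list once with a naive quadratic selection sort and once with Python's built-in sorted (Timsort, which scales roughly as n log n). The player data and sizes are purely illustrative; the point is that the gap between the two timings widens as the list grows.

```python
import random
import time

def selection_sort_desc(scores):
    """Naive O(n^2) sort: repeatedly pick the highest remaining score."""
    remaining = list(scores)
    result = []
    while remaining:
        best = max(range(len(remaining)), key=lambda i: remaining[i][1])
        result.append(remaining.pop(best))
    return result

# A made-up high-score list of (player, score) pairs.
players = [(f"player_{i}", random.randint(0, 10_000)) for i in range(5_000)]

start = time.perf_counter()
selection_sort_desc(players)
print(f"naive sort:      {time.perf_counter() - start:.3f} s")

start = time.perf_counter()
sorted(players, key=lambda p: p[1], reverse=True)
print(f"built-in sorted: {time.perf_counter() - start:.3f} s")
```

Try raising the list size to 50,000: the built-in sort barely notices, while the naive version slows down by roughly a factor of a hundred.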

Figure 2 exemplifies the algorithmic complexity of two schematic algorithms. On the x-axis, it shows the size of the list to be sorted. The y-axis shows the runtime of the algorithm (in time units). Algorithm 1 is much slower than Algorithm 2. In fact, the inefficiency of Algorithm 1 becomes more and more apparent, the more list elements must be sorted. As a result, your game app becomes slower and slower, the more users play with it.

Figure 2: Algorithmic complexity of two different sorting algorithms.

That’s why algorithmic complexity is a thoroughly researched field with decades of progress and myriads of computer scientists who continuously reduce the algorithmic complexity of algorithms to solve the same problems quicker and quicker. In my opinion, the algorithms produced in decades of computer science research are among the most valuable technological assets of humanity. Those algorithms allow us to solve the same problems with fewer resources, not once, but over and over. We truly stand on the shoulders of giants.

Interestingly, algorithmic complexity is not the only metric that measures the complexity of code. There are also practical measures such as cyclomatic complexity, a metric developed by Thomas McCabe in 1976 that describes the number of linearly independent paths through your code: the number of paths that have at least one edge that's not in another path. For example, code with an if statement results in two independent paths through your code, so it has a higher cyclomatic complexity than flat code without an if statement. Cyclomatic complexity is a solid measure of the cognitive complexity of many code bases.
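As a small illustration (the two functions below are hypothetical, not taken from any particular codebase), here is what that means in code: the flat function has a single linear path, while the branching one has two, one per outcome of the if statement.

```python
def shipping_cost_flat(weight_kg):
    # Exactly one path through this function: cyclomatic complexity 1.
    return 5.0 + 2.5 * weight_kg

def shipping_cost_branching(weight_kg, express):
    # The if statement creates a second independent path (express vs. standard),
    # so this function has cyclomatic complexity 2.
    cost = 5.0 + 2.5 * weight_kg
    if express:
        cost += 10.0
    return cost
```

Each additional if, elif, loop, or boolean branch adds another independent path, which is why tools that report cyclomatic complexity flag long, heavily branched functions first.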

However, this measure of complexity ignores the cognitive complexity that comes from, say, nesting different for loops compared to a flat for loop. There are other measures that improve upon this (such as NPath complexity). The important take-away, however, is that complexity is such a big factor in both algorithmic theory and implementation practice that it has been thoroughly researched for decades by hundreds of thousands of researchers. The goal of all these efforts is to reduce complexity to mitigate its harmful effects on productivity and resource utilization of humans and machines alike.

Complexity in Learning

The world’s information can be modeled as a huge network of interrelated chunks of information—and no chunk of information is independent of other chunks.

In 2012, Google announced that it was populating a data structure called the "Knowledge Graph" with information. The Knowledge Graph represents information in a network-like structure which, instead of storing dumb, independent facts, maintains the interrelationships between different facts and pieces of information.

The Google search engine then used this interdependency of facts to enrich the search results with higher-level knowledge and to create many answers autonomously.

Example: a small part of the knowledge graph may be about the famous computer scientist Alan Turing. In the knowledge graph, the concept of Alan Turing is connected to different pieces of information, such as his birth year (1912), his fields of study (computer science, philosophy, language theory), and his doctoral advisor (Alonzo Church). Each of those pieces of information is in turn connected to other facts (e.g., Alonzo Church's field of study was computer science as well), forming a massive network of interrelated facts. You can use this network to acquire new information and answer user queries programmatically. For example, a query about the "field of study of Turing's doctoral advisor" would result in the deduced answer "computer science". While this may sound trivial to you, generating new factoids like these led to a breakthrough in information retrieval and search engine relevancy. You'd probably agree that it's far more effective to learn by association than by memorizing unrelated facts. You can easily remember a story, but you struggle to remember your mother's phone number!
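To make the idea tangible, here is a toy sketch of such a graph as a plain Python dictionary. This is not how Google's Knowledge Graph is actually implemented; it just shows how interrelated facts can be chained together to answer a query like the one above.

```python
# A tiny, hand-built slice of a knowledge graph.
# Each entity maps relation names to related entities or literal values.
knowledge_graph = {
    "Alan Turing": {
        "born": 1912,
        "field_of_study": ["computer science", "philosophy", "language theory"],
        "doctoral_advisor": "Alonzo Church",
    },
    "Alonzo Church": {
        "field_of_study": ["computer science", "mathematics"],
    },
}

def query(entity, *relations):
    """Follow a chain of relations through the graph, hop by hop."""
    value = entity
    for relation in relations:
        value = knowledge_graph[value][relation]
    return value

# "Field of study of Turing's doctoral advisor"
print(query("Alan Turing", "doctoral_advisor", "field_of_study"))
# ['computer science', 'mathematics']
```

A real knowledge graph works at a vastly larger scale, but the principle is the same: answers emerge from following the links between facts rather than from any single stored fact.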

Any field of study explores and tries to extend the knowledge graph of humanity with more facts and observations. But any field of study focuses on only a small part of the graph—each part consisting of myriads of interrelated factoids. You cannot really say that you've understood a fact if you haven't already understood all related facts. How can you possibly know Alan Turing without knowing about the beliefs, philosophies, and characteristics of his doctoral advisor Church? And how can you understand Church without also understanding Turing?

If you study any field, you must accumulate knowledge about a great number of interrelated facts. Over time, you’ll understand more than you understood before—but you’ll never understand everything. Knowledge is far too complex. There are too many dependencies and facts in the graph. You can learn more but the more you learn, the more you become aware of the limitations of your knowledge. It is obvious that complexity poses the most fundamental boundaries to your ambitions to learn. Learning and complexity are two sides of the same coin: complexity is at the boundary of the knowledge you’ve already acquired. To learn more, you must first know how to control complexity.

We're getting kind of abstract here, so let's look at a practical example! Say you want to program a trading bot that buys and sells assets according to a set of sophisticated rules. There's lots of useful knowledge you could acquire before starting your project. You could learn about concepts such as the basics of programming, distributed systems, databases, application programming interfaces (APIs), web services, machine learning, data science, math, and many more.

You could learn about practical tools such as Python, NumPy, scikit-learn, CCXT, TensorFlow, and Flask. You could learn about trading strategies and stock market philosophies. And, in fact, many people approach these problems with such a mindset. They never stop learning. They never feel ready to start the project. They keep learning because the more they learn, the less knowledgeable they feel. Each of the mentioned categories demands thousands of hours of study to master. You'll never attain enough mastery in all those fields to truly feel prepared. Frustrated and overwhelmed by the complexity of the whole endeavor, you feel like throwing in the towel and leaving the field to more intelligent coders. Complexity is about to take its next victim: you.

Fortunately, because you've read this book, you've attained the only effective weapon against complexity. This weapon has many names: focus, simplification, scaling down, reduction, minimalism. It's how you attain success: you focus on mastering one area and forget about everything else. You reduce complexity with minimalism. You think in first principles and forget about the noise. You create one program, not many. Philosophically, you go for one full life and don't try to live one hundred partial ones.

Complexity in Social Networks

Complexity appears wherever there’s interaction between many different components. Social networks are a prime example. If you’re working in a large organization, you know that hundreds or even thousands of people must work together towards shared goals. At the same time, thousands of people also have thousands of different opinions and perspectives on the same matters.

Attaining the same goal can be a stretch for one person and a breeze for another. Writing 100 lines of code may take one programmer a week and another one a day. A third programmer writes 200 lines of code in half an hour, delivering higher quality than both. Moreover, some people work well in teams, others work better alone.

Complexity is an integral part of social networks. But in what way is this relevant for a tech book like this? Many talented software engineers end up as team leaders. (You get promoted until you reach your level of incompetence.) They then leave the comparatively simple and clean world of software and enter the world of social interactions, office politics, leadership, and team motivation. Naturally, they take the tools they've learned in programming and apply them to their new task of leading a team. They try to program every team member with detailed activity logs and rigid structures—and, as a result, end up micromanaging people the way they've micromanaged dumb machines. Yet this clearly doesn't scale, and it doesn't keep morale up!

If you find yourself as a team leader, learn to step back. Forget your rule books and algorithms. Instead, trust and motivate, be empathetic and listen, defend your team members against distractions and harmful influences while setting aside your own interests to nurture your team, and create a healthy, functioning microorganism with its own intelligence, culture, and dynamics for solving problems as they arise.

Effective social interaction is a highly multi-dimensional problem, and there are exceptions to every rule you could formulate for social situations. Many coders cannot handle it: they would rather master the complexity of computers with rules and algorithms than embrace the messy complexity of human relationships.

Are there any further sources of complexity? You bet there are!

Complexity in Processes

A process is a series of actions that you, a group of people, or even a machine takes with the goal of realizing a defined result.

The complexity of a process is given by its number of actions, participants, or branches. In general, if you have more actions, the process becomes more complicated (see Figure 3).

Figure 3: Two example processes: one-person development vs team development.

A lot of processes have been developed, and many software companies follow process models to develop software (e.g., agile development, Scrum), maintain customer relationships (e.g., CRM, sales scripts), and create new products and business models (e.g., the business model canvas). If people are the lifeblood of a business, processes are its veins and arteries. If your arteries are clogged, you need to get rid of the bottlenecks quickly or your body will suffer. It's the same for a business: if the business suffers, you must remove the bottlenecks in your business processes to stay alive.

In an organization, it's often easier to establish a new process than to change an existing one, because of the stakeholders invested in the established processes. Try to shut down an unprofitable department in your organization and you'll quickly learn this lesson. Many organizations deteriorate into bureaucracies through the continuous accumulation of more and more processes. Complexity starts to accumulate in the system. Innovation finds fewer and fewer vehicles for change—it can't break through the complexity. Resources are wasted, and an increasing number of actions within processes become redundant or even unnecessary. Trying to fix the suffering business, management invests energy in establishing new processes and new actions, and the vicious cycle begins to destroy the business or organization.

Complexity is the enemy of efficiency. If you want to build an efficient organization, or if you want to become an efficient individual, radically weed out unnecessary steps, actions, and processes. Keep your house clean and engage in radical minimalism! Don't worry, you most likely won't reach the point where you've weeded out too much. There's no point in telling yourself that complexity in processes is useful and that things can be oversimplified. That's like telling an obese society to make sure it eats enough sugar to supply the body with energy. Yes, but no – increases in complexity and sugar consumption are built into the modern world and take care of themselves!

Complexity in Your Daily Life or the Death of a Thousand Cuts

The purpose of this book is to increase the productivity of your programming efforts. To become a more efficient programmer, you must tackle the daily distractions and the constant competition for your valuable time. There’s a never-ending stream of external distractions that keep you from doing the deep work required for programming. Computer science professor Cal Newport talks about this in his excellent book Deep Work (a recommended read). He argues that there’s both an increasing demand for work that requires deep thinking—programming, researching, medicine, writing—and a decreasing supply due to the proliferation of communication devices, opportunities, and entertainment systems. If increasing demand meets decreasing supply, economic theory suggests that prices will rise. If you’re capable of engaging in deep work, your economic value will increase and your income will rise.

In fact, you can observe this trend in the programming industry: the average freelance developer earns $61 per hour. There has never been a better time for programmers who can engage in deep work. For instance, highly skilled machine learning engineers command yearly salaries of several hundred thousand dollars.

Now, the caveat: it has become almost impossible to engage in deep work unless you brutally enforce it. The external world demands your attention. Your boss and your colleagues will regularly pop into your office. Your smartphone will demand your attention in, say, 20-minute intervals. Your email provider will present you with email after email—each asking for a slice of your time. To make matters worse, it's not only the external world that plays against your plan to engage in deep work—your inner world does, too. Deep work yields delayed gratification. It's a satisfying feeling to have spent weeks of your time completing a computer program that works.

However, what you desire in most moments is instant gratification. When you're about to dive into deep work, your subconscious realizes how much effort it takes and looks for ways to escape it. You want to answer the message from your friend because it will produce an easy endorphin boost. You want to engage in meaningless chit-chat at the coffee corner. You want to watch Netflix and TV. The promise of delayed gratification becomes less and less attractive compared to the happy, colorful, and lively world of instant gratification.

Due to the abundant availability of external stimuli in today's world, your efforts to become productive are likely to die the death of a thousand cuts. Yes, you can switch off your smartphone once and use willpower to skip an episode of your favorite series today. But can you do it consistently, day after day? And even if you could force yourself off social media, would you be able to resist the urge to answer every email in your inbox—and use that time to write your first novel instead? There's an important difference between being busy and being productive!

How to Handle a Complex World

I hope that by now you're properly motivated to overcome complexity—that was the purpose of this whole chapter. You should now see how complexity is the enemy of the lean and efficient organization and individual. You simply cannot become productive without mastering tactics and strategies for handling complexity. A simple but effective tactic against the complexity of social networks is to uninstall them from your smartphone (I won't water down this recommendation just because some people assume it's not possible—read Deep Work by Cal Newport if you want to learn how it can be done). A simple strategy to overcome complexity in business is to schedule a monthly session where you discuss with your team which tasks you can get rid of—even knowing that these tasks may provide some value to some people (think opportunity costs, not absolute value).

Let’s recap some of the most important consequences of the preceding arguments:

Complexity harms productivity due to the loss of focus it incurs.

If you don’t take control and full responsibility for this problem, it will quickly consume your most precious resource: time.

At the end of your life, you won’t judge whether you’ve led a meaningful life based on how many emails you’ve replied to, how many hours of computer games you’ve played, or how many Sudokus you’ve solved.

In the previous chapter, you learned about the power of the 80/20 principle: focus on the vital few and ignore the trivial many. How is the 80/20 principle connected to complexity? This is an interesting question – the two seem to be two sides of the same coin! In a way, complexity creates the opportunity to focus on the vital few. If only the vital few existed, everyone would focus on them, and there wouldn't be any opportunity to achieve more with less.

But fear not – this is not how the world works. Instead, you’ll have plenty of opportunities to achieve more with less because complexity is an integral part of modern life. In fact, with inflating opportunities to spend your time, a growing amount of educational and entertaining content, and proliferating opportunities to visit every place and connect to every person under the sun, you can be sure to face an increasing amount of complexity in the decades to come. However, by learning how to handle complexity, by keeping it simple, you’ll be able to fight complexity better than most people and you’ll create a powerful competitive advantage for yourself and your business.

Like the 80/20 principle, complexity is everywhere, and you constantly need to think about how to reduce it and simplify things. Keep it simple, stupid!

In the next chapter, we’ll learn how to apply this idea of keeping it simple by radically reducing complexity in the software development cycle and the value creation process as a software developer: minimum viable products.

… to be continued …

This article is based on a draft of a new chapter of my upcoming book “From One to Zero: A Minimalistic Approach to Programming”. Stay tuned for launch updates!

Where to Go From Here?

Enough theory, let’s get some practice!

To become successful in coding, you need to get out there and solve real problems for real people. That’s how you can become a six-figure earner easily. And that’s how you polish the skills you really need in practice. After all, what’s the use of learning theory that nobody ever needs?

Practice projects are how you sharpen your saw in coding!

Do you want to become a code master by focusing on practical code projects that actually earn you money and solve problems for people?

Then become a Python freelance developer! It’s the best way of approaching the task of improving your Python skills—even if you are a complete beginner.

Join my free webinar “How to Build Your High-Income Skill Python” and watch how I grew my coding business online and how you can, too—from the comfort of your own home.

Join the free webinar now!

Do you want to develop the skills of a well-rounded Python professional—while getting paid in the process? Become a Python freelancer and order your book Leaving the Rat Race with Python on Amazon (Kindle/Print)!
