Secure Software doesn't develop itself.

The picture shows the top layer of the Linux kernel's API subsystems. Source: https://www.linux.org/attachments/kernel-jpeg.6497/

Author: René Pfeiffer

René Pfeiffer has a background in system administration, software development, physics, and teaching. He writes on this blog about all things code and code security.

Code, Development, Agile, and the Waterfall – Dynamics

The picture shows the waterfalls of Gullfoss under the snow in Iceland. Source: https://commons.wikimedia.org/wiki/File:Iceland_-_2017-02-22_-_Gullfoss_-_3684.jpg

Code requires a process to create it. The collection of processes, tasks, requirements, and checks is called software development. The big question is how to do it right. Frankly, there is no single answer to this question. First, not all code is equal. A web server, a filesystem, a database, and a kernel module for network communication contain distinct code, with only a few functions that can be shared. When I introduce secure coding practices, some attendees of my courses question the usefulness of checklists and of cleaning suspicious data. Security feels old-fashioned, because you have to think about risks, how to address them, and how to improve the sections of your code that connect to the outside world. People like to use the term agile for small teams that bathe in outbursts of creativity and sprint to implement requested features. You can achieve anything you set your mind to. Tear down code, write it anew, deliver the features. This is not how secure coding works, and this is not how your software development process should look (regardless of which paradigm you follow).

It is easy to drift into a rant about the agile manifesto. Condensing the entire development process into 68 words, all done during three days of skiing in Colorado, is bound to create very general statements whose implementations differ wildly. This is not the point I want to make. You can shorten secure coding to 10 to 13 principles. The SEI CERT secure coding documents feature a list of the top 10 practices. That list is still incomplete, and you still have to actually integrate security into your process of writing code. So you can interpret secure coding as a manifesto, too. Leaving the implementation open has advantages. You can use secure coding with all existing and future programming languages. You can use it on all platforms, current and yet to be invented. The principles are always true. Secure coding is a model that you can use to improve how your team creates, tests, and deploys code. This also means that adopting a security stance requires you to alter your toolbox. All of us have a favourite development environment. This is the first place where you can get started with secure coding. It's not all about having the right plugins; it is important to see what code does while it is being developed.

The title features the words agile and waterfall. Please do yourself a favour and stop thinking about buzzwords. It doesn't matter how your development process produces code. It matters that the code has next to no security vulnerabilities, shows no undefined behaviour, and cannot be abused by third parties. Secure code is possible with any development process, provided you follow the principles. Use the principles' freedoms to your advantage and integrate what works best.

CrowdStrike and how not to write OS Drivers

The image shows a screenshot of a null pointer exception in Microsoft Windows. Source: Zach Vorhies

Yesterday the CrowdStrike update disabled thousands of servers and clients all across the world. The affected systems crashed when booting. A first analysis by Zach Vorhies (careful, the link goes to the right-wing social media network X) has some not very surprising news about the cause of the problem. Apparently, the system driver from CrowdStrike hit a null pointer access violation. Of course, people immediately started bashing C++, but this is too shallow. There are different layers where code is executed. User space is usually safe ground where you can use standard techniques for catching errors. Even a crash might be safer for a user space application than continuing and doing more harm. Once your code runs as a system driver, you are part of the operating system and have to observe a couple of constraints. OS code can't just exit or crash (even an exception without a matching catch block counts as a crash). So a null pointer dereference in mission-critical code is something that should never happen. It should have been caught in the testing phase. Furthermore, modern C++ has little use for raw null pointers. Use smart pointers, and you don't need to handle null pointers at all. There is nothing more to it.

You cannot ignore certain error conditions when running within the operating system. Memory allocation, I/O errors, and everything concerning memory operations are critical. There must be checks in place, and there is no excuse for omitting them.
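
As an illustration (a user-space sketch, not CrowdStrike's driver code; the Config type and load_config() function are made up for the example), smart pointers plus an explicit error path make the "pointer is never null" property visible in the code:

#include <memory>
#include <new>
#include <cstdio>

struct Config { int threshold = 0; };

// Factory returns an owning pointer; allocation failure is an explicit
// error path instead of a silent null pointer.
std::unique_ptr<Config> load_config() {
    auto cfg = std::make_unique<Config>(); // throws std::bad_alloc on failure
    cfg->threshold = 42;
    return cfg;
}

int main() {
    try {
        auto cfg = load_config();
        std::printf("threshold: %d\n", cfg->threshold); // cfg is never null here
    } catch (const std::bad_alloc&) {
        std::fprintf(stderr, "allocation failed\n");
        return 1;
    }
    return 0;
}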

Finding 0-Days with Large Language Models exclusive-or Fuzzing

The picture shows all the different Python environments installed on a system. The graphical overview is very confusing. Source: https://xkcd.com/1987/

If all you have is a Large Language Model (LLM), then you will apply it to all of your problems. People are now trying to find 0-days with the might of LLMs. While it is no surprise that this works, there is a better way of pushing your code to the limit. Just use random data! Someone coined the term fuzzing in 1988. People have been using defective punch cards as input for much longer. When testing input filters, you want to eliminate as much bias as possible. This is exactly why people create the input using random data. Human testers think too much, too little, or are too constrained. (Pseudo-)random number generators rarely have a bias. LLMs do. This means that the publication about finding 0-days by using LLMs should not be good news. Just like human Markov chains, LLMs only „look" in a specific direction when creating input data. The model is a slave to its vectors and training data. The process might use the source code as an „inspiration", but so does a compiler with a fuzzing engine. Understanding that LLMs do not possess any cognitive capabilities is the key point here. You cannot ask an LLM what it thinks of the code in combination with certain input data. You are basically using a fancy data generator that uses more energy and is too complex for the task at hand.

Comparing LLMs with fuzzing engines does not work well. Each approach serves its original purpose. Always remember that the input data in security tests should push your filters to the limit and create situations that you did not expect. Randomness will do this far more efficiently than a more complex algorithm. If you are fond of complexity or have too much powerful hardware at hand, there are other things you can do with it.
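
A minimal sketch of this kind of random-input testing, assuming a stand-in parse_input() filter (real fuzzing engines such as AFL or libFuzzer add coverage feedback on top of this idea):

#include <random>
#include <string>
#include <iostream>

// Stand-in for the real input filter you want to stress.
bool parse_input(const std::string& data) {
    return !data.empty() && data.front() == '{';
}

int main() {
    std::mt19937 rng(std::random_device{}());
    std::uniform_int_distribution<int> length(0, 4096);
    std::uniform_int_distribution<int> byte(0, 255);
    long accepted = 0;

    for (int run = 0; run < 100000; ++run) {
        std::string input(length(rng), '\0');
        for (auto& c : input) {
            c = static_cast<char>(byte(rng)); // unbiased bytes, no human preconceptions
        }
        if (parse_input(input)) { // crashes, hangs, or sanitizer reports mark findings
            ++accepted;
        }
    }
    std::cout << accepted << " random inputs passed the filter\n";
}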

URL Validation, Unit Tests and the Lost Constructor

I have some code that requests URLs, looks for session identifiers or tokens, extracts them, and calculates some indicators of randomness. The tool works, but I decided to add some unit tests in order to play with the Catch2 framework. Unit tests require some easy-to-check conditions, so validating HTTP/HTTPS URLs sounded like a good idea to get started. The code uses the curl library for requests, so URLs can be checked either by libcurl or before feeding the URL string to it. Therefore I added some regular expressions. RFC 3986 has a very good description of Uniform Resource Identifiers (URIs). The full regular expression is quite large and matches too many variations of URI strings. You can inspect it on the regex101 web site. Shortening the regex to match URLs beginning with “http” or “https” requires you to define what you want to match. Should there be only domain names? Are IP addresses allowed? If so, what about IPv4 and IPv6? Experimenting with the filter variations took a bit of time. The problem was that no regex matched the pattern. Even patterns that worked fine in other programming languages did not work in the unit test code. The error was hidden in a constructor.
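
For illustration, a deliberately simplified variant (my sketch, not the RFC 3986 expression; it intentionally rejects IP addresses, userinfo, queries, and fragments):

#include <regex>
#include <string>

// Simplified check: scheme, domain labels, optional port and path.
bool looks_like_http_url(const std::string& url) {
    static const std::regex pattern(
        R"(^https?://([a-zA-Z0-9-]+\.)+[a-zA-Z]{2,}(:\d{1,5})?(/\S*)?$)");
    return std::regex_match(url, pattern);
}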

Class definitions in C++ often have several constructor variants. The web interface code can create a default instance where you set the target URL later by using setters. You can also create instances with parameters such as the target URL or the number of requests. The initialisation code sits in one member function, which also initialises the libcurl data structures. So the constructors look like this:

http::http() {
    // Note: init_data_structures() is never called here, so the libcurl
    // flag stays false for default-constructed instances. This is the bug
    // described below.
}

http::http( unsigned int nreq ) {
    init_data_structures();
    set_max_requests( nreq );
    return;
}

The function init_data_structures() sets a flag that tells the instance whether the libcurl subsystem is working. The first constructor does not call the function, so the flag is always false. The missing function call is easy to miss. The reason the line was missing is that the code had only a default constructor at first. The other constructors were added later, and the default constructor was never exercised, because the test code never creates instances without a URL. This brings me back to the unit tests. The Catch2 framework does not need a full program with a main() function. You can directly create instances in your test code snippets and use them. That's why the error got noticed. Unit tests are not security tests. The missing initialisation call is most probably not a security weakness, because the code does not run with the web request subsystem flag set to false. It's still a good way to catch omissions or logic errors. So please do lots of unit tests, all of the time.
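A minimal sketch of such a Catch2 test; the header name and the is_initialised() getter are assumptions, since the class interface is not shown above:

#include <catch2/catch_test_macros.hpp>
#include "http.hpp" // hypothetical header for the http class

TEST_CASE("default constructor initialises libcurl", "[http]") {
    http client;                      // no main() needed, Catch2 provides it
    REQUIRE(client.is_initialised()); // fails with the buggy default constructor
}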

Floating Point Data Types and Computations

The picture shows how real numbers fit into the IEEE 754 floating point data type representation. Source: https://en.wikibooks.org/wiki/Introduction_to_Numerical_Methods/Rounding_Off_Errors

Floating point data types are available in most programming languages. C++ knows the float, double, and long double data types. Other programming languages feature longer (256 bit) and shorter (16 bit and lower) representations. All these data types are specified in the IEEE Standard for Floating-Point Arithmetic (IEEE 754), which is the standard for all implementations. Hardware also supports their storage and operations. Floating point data types are usually used in numerical calculations. Since the use case is to represent real numbers, accuracy is a problem. Mathematically, there are infinitely many other real numbers between two arbitrarily chosen real numbers. Computers are notoriously bad at storing an infinite amount of data. For the purposes of programming, this means that any choice of floating point data type has to deal with error conditions and how to handle them. Obvious errors include division by zero. Less obvious conditions are rounding errors, special numbers (infinity, not a number, signed zeroes, subnormal numbers), and overflows.

Not all of these error conditions may pose a threat to your applications. It depends on what type of numerical calculations your code does or consumes. Comparisons have to be implemented in a thoughtful way. Tests for equality may fail because of rounding errors. Using the “real zero” can backfire. The C and C++ standard libraries supply you with a list of constants. Among them is the epsilon value: the difference between 1.0 and the next larger representable value of the type. Epsilon (ε) is often used to denote very small values. cfloat or float.h defines FLT_EPSILON (for float), DBL_EPSILON (for double), and LDBL_EPSILON (for long double). Using this value as the smallest difference possible is usually a good idea. There is another method for finding neighbouring floating point numbers. C++11 introduced functions to find the next neighbouring value. The accuracy is determined by the unit of least precision (ULP), which is defined by the value of the least significant bit. Using ULPs and using epsilon values are different approaches. ULP checking requires transferring the values into integer registers. Both methods work well away from zero. If you are near zero, consider using multiples of the epsilon value as the comparison threshold.
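
A sketch of a relative-epsilon comparison, with std::nextafter() as the neighbour-finding function mentioned above (the threshold choice is illustrative and depends on your calculations):

#include <cmath>
#include <cfloat>
#include <algorithm>
#include <iostream>

// Relative comparison: scale epsilon by the magnitude of the operands.
// Near zero, replace the threshold with a multiple of DBL_EPSILON.
bool nearly_equal(double a, double b) {
    double diff = std::fabs(a - b);
    double scale = std::max(std::fabs(a), std::fabs(b));
    return diff <= DBL_EPSILON * scale;
}

int main() {
    double x = 0.1 + 0.2;
    std::cout << std::boolalpha
              << (x == 0.3) << '\n'                      // false: rounding error
              << nearly_equal(x, 0.3) << '\n'            // true
              << (std::nextafter(1.0, 2.0) - 1.0) << '\n'; // equals DBL_EPSILON
}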

There is another, often overlooked fact. The float data type has 32 bits of storage. This means there are only about 4 billion different bit combinations, which is not a lot. Looping through all values and stress testing a numerical function can be done in minutes. There is a blog post covering this technique, complete with example code.
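
The pattern looks roughly like this; a sketch assuming C++20 for std::bit_cast, with a made-up function and property under test:

#include <bit>
#include <cstdint>
#include <cmath>
#include <iostream>

// Hypothetical function under test.
float f(float x) { return std::sqrt(x * x); }

int main() {
    std::uint64_t failures = 0;
    // Visit every one of the 2^32 bit patterns exactly once.
    for (std::uint64_t i = 0; i <= 0xFFFFFFFFull; ++i) {
        float x = std::bit_cast<float>(static_cast<std::uint32_t>(i));
        float y = f(x);
        if (!std::isnan(x) && std::isnan(y)) { // example property: no new NaNs
            ++failures;
        }
    }
    std::cout << failures << " inputs violated the property\n";
}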

I have compiled some useful resources for this topic.

Linking against the Threading Building Blocks (oneTBB) library with g++ and clang++

A couple of weeks ago, I created a repository of example code for C and C++ courses. The examples include a source file that uses the Threading Building Blocks (oneTBB) library. Since the examples are rather small, I included no Makefile. Instead, I wrote a Bash script using clang++ and a Ninja build file using g++. Strangely, the clang++ build worked, but the g++ version complained about symbols in the linking phase. The linking errors look horrible. Here is the top of the long list of errors (added here for search engines to find the error):

/usr/bin/ld: /tmp/cc4i6mNS.o: in function `tbb::detail::d1::wait_context::add_reference(long)':
parallel_algorithms.cpp:(.text._ZN3tbb6detail2d112wait_context13add_referenceEl[_ZN3tbb6detail2d112wait_context13add_referenceEl]+0x6c): undefined reference to `tbb::detail::r1::notify_waiters(unsigned long)'
/usr/bin/ld: /tmp/cc4i6mNS.o: in function `tbb::detail::d1::execution_slot(tbb::detail::d1::execution_data const&)':
parallel_algorithms.cpp:(.text._ZN3tbb6detail2d114execution_slotERKNS1_14execution_dataE[_ZN3tbb6detail2d114execution_slotERKNS1_14execution_dataE]+0x14): undefined reference to `tbb::detail::r1::execution_slot(tbb::detail::d1::execution_data const*)'
/usr/bin/ld: /tmp/cc4i6mNS.o: in function `tbb::detail::d1::spawn(tbb::detail::d1::task&, tbb::detail::d1::task_group_context&)':
parallel_algorithms.cpp:(.text._ZN3tbb6detail2d15spawnERNS1_4taskERNS1_18task_group_contextE[_ZN3tbb6detail2d15spawnERNS1_4taskERNS1_18task_group_contextE]+0x30): undefined reference to `tbb::detail::r1::spawn(tbb::detail::d1::task&, tbb::detail::d1::task_group_context&)'
/usr/bin/ld: /tmp/cc4i6mNS.o: in function `tbb::detail::d1::execute_and_wait(tbb::detail::d1::task&, tbb::detail::d1::task_group_context&, tbb::detail::d1::wait_context&, tbb::detail::d1::task_group_context&)':
parallel_algorithms.cpp:(.text._ZN3tbb6detail2d116execute_and_waitERNS1_4taskERNS1_18task_group_contextERNS1_12wait_contextES5_[_ZN3tbb6detail2d116execute_and_waitERNS1_4taskERNS1_18task_group_contextERNS1_12wait_contextES5_]+0x2c): undefined reference to `tbb::detail::r1::execute_and_wait(tbb::detail::d1::task&, tbb::detail::d1::task_group_context&, tbb::detail::d1::wait_context&, tbb::detail::d1::task_group_context&)'…

There was no obvious difference between the compiler calls.

g++ -Wall -Werror -Wpedantic -std=c++20 -ltbb -g -O0 -o parallel_algorithms parallel_algorithms.cpp
clang++ -Wall -Werror -Wpedantic -std=c++20 -ltbb -march=native -o parallel_algorithms parallel_algorithms.cpp

After browsing through a lot of search results that don’t explain the problem, I tried putting -ltbb at the end of the command. If you do this, then everything works fine with g++:

g++ -Wall -Werror -Wpedantic -std=c++20 -g -O0 -o parallel_algorithms parallel_algorithms.cpp -ltbb

🥳 oneTBB doesn’t require much, but its link option has to be at the end of the compiler command. The reason is that the GNU linker resolves symbols from left to right: a library only contributes the symbols that earlier object files have already requested, so -ltbb must come after the objects that reference it. Apparently clang++ drives the linker differently when resolving symbols. Good to know.

Data Conversion in Code – Casting Data Types

A lot of code deals with transforming data from one data type to another. There are many reasons for doing this. The different sizes of integer types and changes of signedness are one issue. In the past, casting was easy and error-prone. C++ has several distinct ways of casting data types. The full list of methods comprises the following (see the sketch after this list):

  • static_cast<T> is for conversions between related data types. The checks are performed at compile time, not at run time.
  • dynamic_cast<T> is useful for pointers and references within a class hierarchy. It checks the data type at run time and prevents unsafe downcasts. A failed pointer cast yields a null pointer; a failed reference cast throws an exception.
  • const_cast<T> does not change the data type. It only adds or removes const (or volatile) qualification, for example to pass data to a legacy API that takes a non-const pointer but does not modify it. Writing to an object that was originally declared const is undefined behaviour.
  • reinterpret_cast<T> is the most dangerous option for converting data. The compiler reinterprets the bit pattern as the new data type without any checks. This should only be an option if you know exactly what you are doing.
  • std::move arrived with C++11. It doesn’t change the data type, and despite its name it doesn’t move anything by itself: it casts its argument to an rvalue reference, which allows move constructors and move assignment to transfer resources instead of copying them. After the move, the source object is left in a valid but unspecified state.
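
A compact sketch of the cast operators in action (all types and functions invented for the example):

#include <iostream>
#include <string>
#include <utility>
#include <cstdint>

struct Base { virtual ~Base() = default; };
struct Derived : Base { int value = 7; };

void legacy_print(char* s) { std::cout << s << '\n'; } // pre-const API

int main() {
    double d = 3.9;
    int i = static_cast<int>(d); // related types, checked at compile time
    std::cout << i << '\n';

    Base* b = new Derived;
    if (auto* dp = dynamic_cast<Derived*>(b)) { // run-time checked downcast
        std::cout << dp->value << '\n';
    }

    const char* text = "hello";
    legacy_print(const_cast<char*>(text)); // drops const; must not write to it

    auto bits = reinterpret_cast<std::uintptr_t>(b); // raw bit reinterpretation
    std::cout << std::hex << bits << '\n';

    std::string src = "payload";
    std::string dst = std::move(src); // transfers the buffer; src is valid but unspecified

    delete b;
}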

While you can still use the old-fashioned C-style cast, you should never do this in C++. The named cast operators document more clearly what you intend to do in your code. For more complex operations, it is recommended to create a method for performing the transformation. This method can also enforce range checks or other tests before converting data. There is a blog article that illustrates the conversion methods by using Han Solo from Star Wars.

Using constants and constant expressions is something you should absolutely adopt in your coding style. Marking data storage as read-only allows the compiler to create faster code. In addition, you will get errors when other parts of your code modify data that should not be changed.
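
A small sketch of the idea:

#include <iostream>

constexpr int max_retries = 5; // evaluated at compile time

int main() {
    const double pi = 3.141592653589793; // read-only after initialisation
    // pi = 3.0;          // error: assignment of read-only variable
    // max_retries = 10;  // error: assignment of read-only variable

    static_assert(max_retries > 0, "retries must be positive"); // compile-time check
    std::cout << pi * max_retries << '\n';
}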

C++ Style Improvements

The international tidy person symbol. Source: https://commons.wikimedia.org/wiki/File:International_tidyman.svg

Recently I experimented with Clang Tidy. I let it analyse some of my code and looked at the results. It was really helpful, because it suggested a lot of small improvements that do not radically change your code. If you started with C and switched to C++ at some point, you will definitely still have some C habits in your code. APIs expecting pointers to character buffers in particular often use C-style arrays. There are ways to get rid of these language constructs, but you have to work on your habits. Avoiding C-style arrays is straightforward. You just have to do it. There is a reason why C and C++ are different programming languages.
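
A typical before/after sketch of this habit:

#include <array>
#include <string>
#include <iostream>

int main() {
    // C habit: fixed-size buffer, size tracked by convention only.
    char name_c[64] = "Gullfoss";

    // C++ replacements: the size is part of the type or managed for you.
    std::array<char, 64> buffer{};  // fixed size, bounds-aware via .size()
    std::string name = "Gullfoss";  // grows as needed, owns its memory

    std::cout << name_c << ' ' << name << ' ' << buffer.size() << '\n';
}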

Then there are the new C++ standards. In terms of security and convenience, the C++11 and C++17 standards are milestones. You should update your code to at least the C++11 standard. Clang Tidy can help you. There is a class of checks covering the C++ Core Guidelines. The guidelines are a compilation of recommendations for refactoring your code into a version that is less error-prone and more maintainable. Keep in mind that you can never apply all rules and recommendations to your code base. Pick a set of five to seven issues and try to implement them. Repeat when you have time or need to change something in the code anyway.
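
For example, running the Core Guidelines and modernize checks on a single file could look like this (adjust the compile options after the double dash to your project):

clang-tidy --checks='modernize-*,cppcoreguidelines-*' main.cpp -- -std=c++17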

Please stop anthropomorphising Algorithms

The picture shows a box with lots of cables and technical-looking decorations. It symbolises a machine and was created by the Midjourney image generator.

If you read the news, the articles are full of hype for the new Large Language Models (LLMs). People do amazing and stupid things by chatting with these algorithms. The problem is that LLMs are just algorithms. They have been „trained" with large amounts of data, hence the first „L". There is no personality or cognitive process involved. Keep the following properties in mind when using these algorithms.

  • LLMs stem from the field of artificial intelligence, which is a field of computer science.
  • LLMs perform no cognitive work. The algorithms do not think; they remix, filter, and repeat. They can’t create anything that a random generator combined with a Markov chain of sufficiently high order could not.
  • LLMs have no opinion.
  • LLMs feature built-in hallucinations. This means the algorithms can lie. Any of the current models (and possibly future models of the same kind) will generate nonsense at some point.
  • LLMs can be manipulated by conversation (by using variations of prompts).
  • LLMs are highly biased, because their training data lacks different perspectives (given its size and the cultural background of the data, especially the languages used for training).
  • LLMs can’t be debugged reliably. Most fixes to problems comprise input or output filters. Essentially, these filters are allow or block lists, which is the weakest form of filter-based protection.

So there are a lot of arguments for not treating LLMs as something with an opinion, personality, or intelligence. These models can mimic language. During training, they only learn language patterns. Yes, LLMs belong to the research field of artificial intelligence, but there is no thinking going on. The label „AI" doesn’t describe what the algorithm is doing. This is not news. Emily Bender published an article titled „Look behind the curtain: Don’t be dazzled by claims of ‘artificial intelligence’" in the Seattle Times in May 2022. Furthermore, there is a publication by Emily Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell, titled On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?, published in March 2021. Both articles appeared well before the „new" chat algorithms and LLM-powered „search engines" hit the marketing departments. You have been warned.

So, this article is not meant to discourage anyone from exploring LLMs. You just need to be careful. Remember that algorithms are seldom a silver bullet that can solve all your problems or make them go away. This is especially true if you want to use LLMs for your software development cycle.

Update: If you really want to explore the uses of synthetic text generation, please watch the presentation “ChatGP-why: When, if ever, is synthetic text safe, appropriate, and desirable?” by Emily Bender.

Continuous Integration is no excuse for a complex Build System

The picture shows a computer screen with keyboard in cartoon style. The screen shows a diagram of code flows with red squares as markers for errors.

Continuous Integration (CI) is a standard in software development. A lot of companies use it in their development process. It basically means using automation tools to test new code more frequently. Instead of continuous, you could also say automated, because CI can’t work manually. Modern build systems comprise scripts and declarative configurations that invoke components of the toolchain in order to produce executable code. Applications built with different programming languages can invoke a lot of tools with individual configurations. The build system is also a part of the code development process. What does this mean for CI in terms of secure coding?

First, if you use CI methods in your development cycle, make sure you understand the build system. When working with external consultants who audit your code, the review must be possible without the CI pipeline. In theory, this is always the case, but I have seen code collections that cannot be built easily because of the many configuration parameters hidden in hundreds of directories. Some configurations are old and use environment variables to control how the toolchain has to translate the source. Cross-platform code in particular is difficult to analyse because of the mixture of tools. Often it is only possible to study the source. This is a problem, because a code review also needs to rebuild the code with changed parameters (for example, different compiler flags, replaced compilers, added analyzers, and so on). If the build process doesn’t allow this, then you have a problem. It also makes switching to different tools impossible, which becomes necessary when you need to test new versions of your programming language or migrate old parts of your code to a newer standard.

Furthermore, if your code cannot be built outside your CI pipeline, then reviews are basically impossible. Just reading the source means that a lot of testing cannot be done. Ensure that your build systems do not grow into a complex creation no one wants to touch any more. The rules of secure and clean coding also apply to your CI pipeline. Create individual packages. Divide the build into modules, so that you can assemble the final application from independent building blocks. Also, refactor your build configuration. Make it simpler and remove all workarounds. Once the review gets stuck and auditors have to read your code like a newspaper, it is too late.

