Testing Executables

Learn the standard pattern for testing application logic by refactoring it into a linkable library

Greg Filak

So far in this chapter, our tests have focused on our GreeterLib. Testing libraries is straightforward because they are designed to be linked against and used by other code, including our test executables.

But what about our GreeterApp? How do we test the code inside main.cpp? This presents a technical challenge: an executable target, by definition, has a main() function and is a final, standalone product. You cannot easily link one executable into another; one immediate problem is that we'd end up with two main() functions, which the linker would reject as a "multiple definition" error.

This lesson tackles this problem by introducing a standard refactoring pattern. We'll learn how to separate an application's core logic from its entry point, making the logic testable and the overall design more modular and robust.

Organizing Tests for Scalability

As a project grows, so does its test code. A single tests/main.cpp file quickly becomes unmanageable. There are many ways to organize our tests, but one option is to structure our tests/ directory to mirror our main project's source layout.

For our Greeter project, this would mean creating subdirectories for app and greeter within our /tests directory:

Greeter/
├─ app/
├─ greeter/
└─ tests/
  ├─ app/                    
  │  ├─ CMakeLists.txt      
  │  └─ test_app.cpp        
  ├─ greeter/                
  │  ├─ CMakeLists.txt      
  │  └─ test_greeter.cpp    
  └─ CMakeLists.txt

Making Executables Testable

This structure immediately presents a new challenge. We want to write tests for the logic inside our GreeterApp executable, but how do you test a main() function?

An executable target cannot be linked into another target. It's a final product with a single entry point. Trying to link it into a test executable would create a conflict.

The solution is a refactoring pattern: isolate the code you want to test into a linkable unit (a library). This library can contain all the code that our executable needs. The executable target then just becomes a thin wrapper - little more than a main() that calls a run_app() function defined in our library.

Step 1: Create an Application Logic Library

We'll move the core logic of our application out of main.cpp and into a new library. Let's call it GreeterAppLogic.

First, let's create the source files for this new library.

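The interactive file viewer from the original page isn't reproduced here, but we can sketch the two new files. Based on main.cpp and the run.cpp shown later in this lesson, app/include/app/run.h declares a single run_app() function, and the initial app/src/run.cpp (before logging is added) constructs a Greeter and prints its greeting. A self-contained sketch, with a stand-in Greeter since the real class lives in GreeterLib:

```cpp
#include <iostream>
#include <string>

// --- app/include/app/run.h (sketch) ---
// Declares the app's logic entry point so main.cpp and tests can call it
int run_app(int argc, char* argv[]);

// Stand-in for greeter/Greeter.h; the real class comes from GreeterLib,
// so this exact interface is an assumption for illustration
class Greeter {
public:
  std::string greet() const {
    return "Hello from the Greeter class!\n";
  }
};

// --- app/src/run.cpp (sketch, before logging is added) ---
int run_app(int argc, char* argv[]) {
  Greeter my_greeter;
  std::cout << my_greeter.greet();
  return 0;
}
```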

Our main.cpp now becomes a simple, untestable forwarder.

app/src/main.cpp

#include <app/run.h>

// The main function is just a thin entry point
int main(int argc, char* argv[]) {
  return run_app(argc, argv);
}

Step 2: Update the App's CMakeLists.txt

Now, we update app/CMakeLists.txt to define this new two-part structure: a logic library and an executable that uses it.

app/CMakeLists.txt

cmake_minimum_required(VERSION 3.23)

# 1. The library containing the testable logic
add_library(GreeterAppLogic src/run.cpp)

target_link_libraries(GreeterAppLogic PRIVATE GreeterLib)

target_sources(GreeterAppLogic PUBLIC
  FILE_SET HEADERS
  BASE_DIRS "include"
  FILES "include/app/run.h"
)

# 2. The thin executable wrapper
add_executable(GreeterApp src/main.cpp)
target_link_libraries(GreeterApp PRIVATE GreeterAppLogic)

Step 3: Update the Test CMakeLists.txt

With our logic now in a library, our test executable can link against it correctly.

First, the root tests/CMakeLists.txt becomes a simple dispatcher:

tests/CMakeLists.txt

# This file just orchestrates the tests in subdirectories
add_subdirectory(greeter)
add_subdirectory(app)

The test for our GreeterLib remains mostly unchanged, just moved to its new location. We've also renamed the target from GreeterTests to GreeterLibTests for clarity:

tests/greeter/CMakeLists.txt

cmake_minimum_required(VERSION 3.23)
find_package(GTest REQUIRED)

add_executable(GreeterLibTests test_greeter.cpp)

target_link_libraries(GreeterLibTests PRIVATE
  GreeterLib
  GTest::gtest
  GTest::gmock
  GTest::gmock_main
)

gtest_discover_tests(GreeterLibTests)

Finally, the new GreeterAppTests can link against GreeterAppLogic.

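The file viewer isn't reproduced here, but a tests/app/CMakeLists.txt mirroring the GreeterLib test setup above would plausibly look like this (a sketch, not the lesson's exact file):

```cmake
# tests/app/CMakeLists.txt
cmake_minimum_required(VERSION 3.23)
find_package(GTest REQUIRED)

add_executable(GreeterAppTests test_app.cpp)

# Link the logic library, not the GreeterApp executable
target_link_libraries(GreeterAppTests PRIVATE
  GreeterAppLogic
  GTest::gtest
  GTest::gmock
  GTest::gmock_main
)

gtest_discover_tests(GreeterAppTests)
```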

With this structure, our test suite is now organized and easier to scale. We should make sure our project still builds and runs as expected:

cmake --preset default
cmake --build --preset default
./build/app/GreeterApp
Have a nice day! Hello from the Greeter class!

And that our existing library tests still pass, as well as our new app test:

ctest --preset default
...
3/4 Test #3: DayOfWeekGreetings/GreeterDayTest.GreetsCorrectlyForDay
4/4 Test #4: GreeterAppTests.RunsWithoutCrashing

100% tests passed, 0 tests failed out of 4

Test-Driven Development (TDD)

Test-Driven Development (TDD) is a process that flips the usual workflow on its head. Instead of writing production code and then writing tests for it, you write the tests first. This has several advantages:

  • Drives Better Design: Before you write a single line of implementation, you must first think as a consumer of your own code. What should the function be called? What arguments should it take? What should it return? Writing the test first forces you to design a friendly API from the outside in.
  • Creates Specifications: A test is an unambiguous definition of what the production code must do to be considered "correct." By considering and testing edge cases before writing the implementation, we build a clearer target to aim for, which helps us make better decisions on how the feature should be implemented.
  • Speeds Up Development: While writing our code, we can run the tests to see what our logic is doing. This is often quicker than stepping through the code in a debugger or logging results.
  • It tests the tests: In complex scenarios, it's surprisingly easy to write tests that always pass, regardless of how the system is behaving. If a test passes before you've written the implementation, you know the test itself is flawed.

Practical Example: Adding Logging

Let's use TDD to add a new feature to our GreeterApp. The requirement is: when run_app() is executed, it should create a log file named app_log.txt and write a timestamped startup message to it.

Write a Failing Test

First, we'll write a test in tests/app/test_app.cpp that assumes this feature already exists:

tests/app/test_app.cpp

#include <gmock/gmock.h>
#include <gtest/gtest.h>
#include <app/run.h>
#include <fstream>
#include <string>

class AppLogTest : public testing::Test {
protected:
  const std::string log_filename{"app_log.txt"};
};

TEST_F(AppLogTest, LogsStartupMessageToFile) {
  run_app(0, nullptr);

  std::ifstream log_file(log_filename);
  ASSERT_TRUE(log_file.is_open());

  std::string line;
  std::getline(log_file, line);

  // Use a matcher to check the format
  EXPECT_THAT(line, testing::HasSubstr(
    "App starting up"
  ));
}

Let's walk through the logic here:

  • We use a test fixture in this example - AppLogTest. This is not necessary for now - we could just use a regular test that includes the app_log.txt string - but we'll expand this file soon and the fixture will be useful.
  • We create a std::ifstream to read the file.
  • We first check that the file exists using the stream's is_open() method. We use ASSERT here to raise a fatal failure if the file doesn't exist, as there's no point continuing in that scenario.
  • We use std::getline() to copy the first line from the stream into a std::string called line
  • We expect that the line contains the substring "App starting up". Our log file will have additional information too, such as the timestamp. We can test for that using regex techniques (testing::MatchesRegex or testing::ContainsRegex) if preferred, but a simple check using testing::HasSubstr is good enough for this example
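For instance, if we did want the stricter timestamp check, we could prototype the regular expression with the standard library first. A minimal sketch, assuming spdlog's default "[timestamp] [logger] [level] message" line format (the sample line in the comment is hypothetical); note that the regex syntax accepted by testing::ContainsRegex varies by platform, so a simplified pattern may be needed there:

```cpp
#include <regex>
#include <string>

// True when the line starts with a bracketed ISO-style date and
// contains the startup message, e.g.
// "[2024-01-01 12:00:00.123] [app_logger] [info] App starting up"
bool looks_like_startup_log(const std::string& line) {
  static const std::regex pattern(
    R"(\[\d{4}-\d{2}-\d{2}[^\]]*\].*App starting up)");
  return std::regex_search(line, pattern);
}
```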

Note that this test might find a log file that was created in a previous test run. This would mean our test would pass even though our current executable is broken and is no longer creating log files. We'll fix this flaw later in the lesson.

If we build and run this test now, it will fail because app_log.txt isn't being created. This failure is a good thing, as we haven't built that feature yet:

cmake --build --preset default
ctest --preset default
Value of: log_file.is_open()
  Actual: false
Expected: true

[  FAILED  ] AppLogTest.LogsStartupMessageToFile (0 ms)

Making the Test Pass

Now, we write the code in app/src/run.cpp to make the test pass. We'll add the spdlog dependency and the file logging logic.

First, let's update vcpkg.json to include spdlog:

vcpkg.json

{
  "name": "greeter",
  "dependencies": [
    "gtest",
    "spdlog"
  ]
}

We'll also update app/CMakeLists.txt to find and link spdlog to our logic library:

app/CMakeLists.txt

cmake_minimum_required(VERSION 3.23)

add_library(GreeterAppLogic src/run.cpp)
find_package(spdlog CONFIG REQUIRED)

target_link_libraries(GreeterAppLogic PRIVATE
  GreeterLib
  spdlog::spdlog
)

target_sources(GreeterAppLogic PUBLIC
  FILE_SET HEADERS
  BASE_DIRS "include"
  FILES "include/app/run.h"
)

add_executable(GreeterApp src/main.cpp)
target_link_libraries(GreeterApp PRIVATE GreeterAppLogic)

Now, let's add the logging code to run_app().

app/src/run.cpp

#include <app/run.h>
#include <greeter/Greeter.h>
#include <iostream>
#include <spdlog/sinks/basic_file_sink.h>
#include <spdlog/spdlog.h>

int run_app(int argc, char* argv[]) {
  // Create a logger that writes to "app_log.txt"
  auto file_logger{
    spdlog::basic_logger_mt("app_logger", "app_log.txt")
  };
  spdlog::set_default_logger(file_logger);

  spdlog::info("App starting up");

  Greeter my_greeter;
  std::cout << my_greeter.greet();

  // Flush logs to ensure they're written
  spdlog::shutdown();
  return 0;
}

Finally, if we rebuild and run our tests, the LogsStartupMessageToFile test should pass.

cmake --build --preset default
ctest --preset default
...
4/4 Test #4: AppLogTest.LogsStartupMessageToFile ... Passed
100% tests passed, 0 tests failed out of 4

Test SetUp() and TearDown()

Previously, we saw how creating a fixture in the form of a class derived from testing::Test lets us share variables and functions that a collection of tests might need.

This may simply involve creating some member variables or, in more complex cases, we can add a constructor or override the inherited SetUp() method for logic that needs to run before every test that uses this fixture:

class AppLogTest : public testing::Test {
protected:
  const std::string log_filename{"app_log.txt"};

  void SetUp() override {
    // SetUp logic...
  }
};

We can extend this to have our tests clean up after themselves by implementing a destructor, or overriding the TearDown() method:

class AppLogTest : public testing::Test {
protected:
  const std::string log_filename{"app_log.txt"};

  void SetUp() override {
    // SetUp logic...
  }

  void TearDown() override {
    // TearDown logic...
  }
};

This is useful when our tests are creating long-lasting changes that may affect future tests. Our test that calls run_app() is an example of this. It creates a file on the filesystem, which will persist even after the test ends.

This is a problem because if, for example, we make a change that breaks our logging, our test might still pass. It simply reads the log file from the previous run and assumes all is well.

Even if we clear out our build directory between test runs, this lingering file can still cause issues within a single run. For example, suppose we write a second test that runs our app in a different context and checks for the log file. That file may already exist from the first test, giving us unreliable results.

Below, we write a new test that asserts that a log file exists, but doesn't actually do anything that would cause that log file to be created:

tests/app/test_app.cpp

#include <gmock/gmock.h>
#include <gtest/gtest.h>
#include <app/run.h>
#include <fstream>
#include <string>

class AppLogTest : public testing::Test {
protected:
  const std::string log_filename{"app_log.txt"};
};

TEST_F(AppLogTest, LogsStartupMessageToFile) {/*...*/}

TEST_F(AppLogTest, AnotherTest) {
  std::ifstream log_file(log_filename);
  // ...don't create a log file
  ASSERT_TRUE(log_file.is_open());
}

This test should fail but, if it runs after our existing test created that log file, it will incorrectly pass:

cmake --build --preset default
ctest --preset default
5/5 Test #5: AppLogTest.AnotherTest ... Passed
100% tests passed, 0 tests failed out of 5

We can solve this by adding a TearDown() function to our fixture. This will clean up any residual effects of tests using this fixture, ensuring each test is a discrete, self-contained unit:

tests/app/test_app.cpp

#include <gmock/gmock.h>
#include <gtest/gtest.h>
#include <app/run.h>
#include <fstream>
#include <string>
#include <cstdio>  // for std::remove

class AppLogTest : public testing::Test {
protected:
  const std::string log_filename{"app_log.txt"};

  void TearDown() override {
    // Clean up the log file after each test
    std::remove(log_filename.c_str());
  }
};

TEST_F(AppLogTest, LogsStartupMessageToFile) {/*...*/}
TEST_F(AppLogTest, AnotherTest) {/*...*/}

Our AnotherTest should now reliably fail as expected:

cmake --build --preset default
ctest --preset default
Value of: log_file.is_open()
  Actual: false
Expected: true

[  FAILED  ] AppLogTest.AnotherTest (0 ms)
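As an aside, std::remove comes from <cstdio> and simply returns a nonzero value on failure; C++17's std::filesystem::remove is an alternative that reports whether anything was deleted. A small self-contained sketch of the create-then-clean-up cycle the fixture performs, using a hypothetical file name:

```cpp
#include <cstdio>
#include <filesystem>
#include <fstream>

// Simulates a test's side effect followed by TearDown()'s cleanup
bool file_cleanup_works(const char* filename) {
  std::ofstream(filename) << "App starting up\n";  // the test's side effect
  std::remove(filename);                           // TearDown()'s cleanup
  return !std::filesystem::exists(filename);       // true if nothing lingers
}
```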

Summary

This lesson addressed the challenge of testing code that lives inside an executable. By applying a simple refactoring pattern, we made our application's logic fully testable.

  • The Problem: Executable targets can't be linked against, making it impossible for a test executable to directly call functions defined in the application's main.cpp.
  • The Solution: Isolate the application's core logic into its own library (GreeterAppLogic). The executable (GreeterApp) becomes a thin, untestable wrapper that simply calls a function from the logic library.
  • Testable by Design: The test executable can then link against the new logic library, giving it full access to all the application's core functionality for testing.
  • TDD for Applications: We saw how to write a failing test first and then implement the feature in the logic library to make it pass.
  • Test Isolation: We used gtest fixtures with SetUp() and TearDown() methods to ensure our tests are isolated, cleaning up side effects like created files so that one test cannot influence another.