C23: The Modern Standard C Deserves
I’ve been following the C23 standardization process closely, and I’m genuinely excited about what the committee has delivered. After years of stagnation, C23 finally addresses real pain points in systems programming without bloating the language. It’s not revolutionary, but it’s pragmatic, exactly what C needed. Let me walk through the major additions that actually matter, because there’s a lot of noise out there about features that sound cool but solve marginal problems.
nullptr: Type Safety, Finally
For decades, C developers have had to contend with NULL, which is fundamentally broken. NULL is typically defined as (void*)0 or sometimes just 0, and that creates ambiguity. Is it a pointer? Is it an integer? The answer depends on the implementation, and the difference bites you in edge cases like variadic functions and _Generic selections, where the two definitions produce different behavior.
C23 adopts nullptr from C++11, and I think it’s one of the most important additions. It’s not flashy, but it solves a real problem that’s bitten me multiple times in production code.
// C11: the ambiguity problem. C has no overloading, so dispatch goes through _Generic.
void process_int(int x) { printf("integer: %d\n", x); }
void process_ptr(void* p) { printf("pointer\n"); }
#define process(x) _Generic((x), int: process_int, void*: process_ptr)(x)
process(NULL); // If NULL is 0, this calls process_int; if (void*)0, process_ptr.
               // Implementation-defined roulette.
With C23:
#include <stddef.h>
#define process(x) _Generic((x), \
    int: process_int, \
    void*: process_ptr, \
    nullptr_t: process_ptr)(x)
process(nullptr);     // Unambiguous: always calls process_ptr
process_int(nullptr); // Compiler error: nullptr_t doesn't convert to int
nullptr has type nullptr_t, which implicitly converts to any pointer type (and to bool) but never to integers. This is type safety done right. It’s a small change, but it prevents an entire class of bugs that are notoriously hard to debug.
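To make the conversion rules concrete, here’s a minimal sketch (the variable names are mine, purely illustrative):
#include <stddef.h>
void *p = nullptr;         // OK: converts to any object pointer type
int (*fp)(void) = nullptr; // OK: function pointers too
bool flag = nullptr;       // OK: converts to bool (always false)
int n = nullptr;           // Compiler error: no conversion to integer types
nullptr_t saved = nullptr; // nullptr_t is a real type you can name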
const vs constexpr
const in C is not a constant; it’s a read-only qualifier. This distinction matters more than you’d think.
The const Problem
const int max_users = 10;
int user_array[max_users]; // Error at file scope in C99/C11; a silent VLA at block scope
Why doesn’t this work? Because const int max_users = 10; declares a read-only variable, not a compile-time constant. The value lives in memory at runtime, so the compiler can’t use it where a constant expression is required, such as the dimension of an array with static storage. This is one of the most frustrating edge cases in C:
// size isn't a constant expression, so this fails
const int size = 10;
static int arr[size]; // Error: a static array needs a true compile-time constant
// At block scope the same declaration compiles, but silently as a VLA
int n = 10;
int arr2[n]; // OK: variable-length array, sized at runtime
The committee couldn’t “fix” const without breaking existing code and diverging further from C++. So they introduced constexpr.
The constexpr Solution
constexpr gives you an actual compile-time constant. It must be initialized with a constant expression, and it is itself a constant expression:
constexpr int max_users = 10;
int user_array[max_users]; // Works! Compiler knows this is truly constant.
// You can use constexpr values in case labels
switch (operation) {
    case max_users: // OK
        break;
}
Here’s the critical difference:
| Feature | const int | constexpr int |
|---|---|---|
| Memory | An object with an address (though often optimized out) | Value fixed at compile time; storage usually optimized away |
| Initialization | Can be initialized with a runtime value, e.g. const int x = f(); | Must be initialized with a constant expression at declaration |
| Constant expression | No | Yes: usable in array sizes and case labels |
| External linkage | Can be extern const int x; | Cannot be extern |
| Evaluation time | Initializer may run at runtime (often optimized) | Always compile time |
// The edge case that trips everyone up (inside a function body)
const int runtime_val = some_function(); // Legal: const just means read-only
static int arr1[runtime_val]; // Error: not a constant expression
constexpr int compile_val = 42; // Guaranteed compile-time constant
static int arr2[compile_val]; // OK
The reason const doesn’t work here is subtle but important: const is a type qualifier. It affects how the compiler treats the variable in context (you can’t modify it), but the variable is still an ordinary object with a runtime address. C++ goes further and treats a const int initialized with a constant expression as a constant expression itself; C never adopted that rule, so the compiler has no license to evaluate const variables at compile time.
constexpr solves this by being a storage-class specifier (like static or extern) that explicitly tells the compiler “this must be a compile-time constant, no exceptions.”
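Here’s a minimal sketch of how that plays out at file scope (the buffer-pool names are mine, purely illustrative):
constexpr int page_size = 4096;
constexpr int num_pages = 16;
constexpr int pool_size = page_size * num_pages; // still a constant expression
static unsigned char pool[pool_size];            // file-scope array needs a true constant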
#embed
This feature might seem niche, but it solves a real problem I’ve faced in embedded and system programming: embedding binary assets (textures, fonts, data files, even firmware blobs) directly into your executable.
Before C23, your options were terrible:
- Use xxd or objcopy to generate C source files (messy build process)
- Base64-encode the data and decode it at runtime (performance killer)
- Link in binary objects and use linker scripts (platform-specific nightmare)
With #embed, it’s one line:
// Embed a PNG icon directly into the executable
const unsigned char icon_data[] = {
#embed "icon.png"
};
// You can even specify how many bytes to read
unsigned char partial[] = {
#embed "large_file.bin" limit(1024)
};
// Add prefix and suffix bytes for convenience. Note the commas:
// the tokens are pasted verbatim around the embedded byte list.
unsigned char palette[] = {
#embed "colors.raw" prefix(0xFF,) suffix(, 0x00)
};
The preprocessor reads the binary file and injects it as a byte sequence. No runtime overhead. No build tool hell. This is particularly valuable for embedded systems where you’re shipping firmware or game assets.
I’ve spent way too many hours dealing with build scripts that generate lookup tables from data files. #embed eliminates that entire class of complexity.
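C23 also gives the preprocessor __has_embed, so you can probe for a resource before committing to it. A hedged sketch (the file name is just an example):
#if __has_embed("icon.png") // truthy if the file is found
const unsigned char icon_data[] = {
#embed "icon.png"
};
const unsigned int icon_size = sizeof icon_data;
#else
#error "icon.png is missing from the build"
#endif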
_BitInt(N): Hardware-Precise Types
One of the most useful additions for systems programming. _BitInt(N) lets you specify an integer type with an exact bit width, useful for hardware registers, network protocols, or tight memory constraints.
// Declare integers with exact bit widths
unsigned _BitInt(7) signal_strength; // 0-127 range
_BitInt(12) temperature; // Custom 12-bit signed value
unsigned _BitInt(24) rgb_color; // 24-bit color (like hardware expects)
// Particularly useful for hardware registers
typedef struct {
    unsigned _BitInt(4) version;
    unsigned _BitInt(4) type;
    unsigned _BitInt(8) length;
} ProtocolHeader; // Note: each member still occupies whole bytes;
                  // use _BitInt bit-fields if you need packing

// Before _BitInt, you'd need messy bit fields:
typedef struct {
    unsigned version : 4;
    unsigned type : 4;
    unsigned length : 8;
} OldWay; // Alignment, padding, and signedness are all implementation-defined
Bit fields in C are notoriously underspecified: their alignment, padding, and even signedness are implementation-defined. _BitInt solves this by having its semantics explicitly pinned down by the standard. This matters when you’re working with hardware where every bit counts.
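One concrete guarantee worth knowing: converting a value back into an unsigned _BitInt(N) wraps modulo 2^N on every platform. A minimal sketch:
#include <stdio.h>
int main(void) {
    unsigned _BitInt(4) counter = 15uwb; // uwb: C23 literal suffix for unsigned _BitInt
    counter += 1;                        // wraps modulo 2^4, by the standard
    printf("%d\n", (int)counter);        // prints 0
    return 0;
}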
For cryptography libraries, this is potentially huge. OpenSSL and other crypto libs implement arbitrary-precision arithmetic with custom bignum types. With Clang supporting _BitInt(8388608) (8 million bits!), some of that could theoretically be replaced with a standard language feature.
// Crypto example: a 2048-bit RSA modulus as one integer type
typedef unsigned _BitInt(2048) RSA2048Key; // portable only where BITINT_MAXWIDTH >= 2048
// Before: you had to roll your own bigint implementation
typeof and typeof_unqual
Stop Writing Complicated Declarations
C declarations are famously hard to read. int *(*(*fp)(void))[10] makes everyone’s brain hurt. typeof gives you a way to reference types in a more intuitive way:
// Declare complicated types intuitively
int get_status(void);
typeof(get_status()) status; // Type matches return type of get_status
// Array of 100 pointers to int
typeof(int *[100]) arr_of_ptrs; // Clearer than int *arr_of_ptrs[100];
// Pointer to function returning int
typeof(int(void)) *func_ptr; // func_ptr is pointer to function returning int
The real power is in generic programming and macros:
#define SWAP(a, b) \
    do { \
        typeof(a) temp = (a); \
        (a) = (b); \
        (b) = temp; \
    } while (0)
SWAP(x, y); // Works for any type!
typeof_unqual removes qualifiers, useful when you need the unqualified version of a type:
const int ci = 42;
typeof(ci) ci2; // ci2 is const int
typeof_unqual(ci) ci2u; // ci2u is int (const removed)
This is particularly useful in generic code where you might receive a const-qualified type but need to work with the unqualified version.
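For example, here’s a hedged sketch of a macro that makes a writable local copy of a possibly-const value (LOCAL_COPY is my name, not a standard macro):
#define LOCAL_COPY(name, expr) typeof_unqual(expr) name = (expr)

void demo(const int config_value) {
    LOCAL_COPY(v, config_value); // v is plain int: const has been stripped
    v += 1;                      // fine, v is a mutable copy
}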
Enums with Explicit Underlying Types
Before C23, enums were limited. Their underlying type was implementation-defined, which meant you couldn’t guarantee size or signedness:
// Old way: pray the compiler uses what you expect
enum Status { IDLE, RUNNING, STOPPED };
// The underlying type is implementation-defined: int, unsigned int, even char
sizeof(enum Status); // Could be 1, 2, 4, or 8 bytes depending on the compiler
C23 lets you specify:
// Guarantee it's exactly 8 bits, unsigned
enum Status : unsigned char {
    IDLE = 0,
    RUNNING = 1,
    STOPPED = 2
};
// Or use _BitInt for precise control
enum NetworkState : unsigned _BitInt(4) {
    DISCONNECTED = 0,
    CONNECTING = 1,
    CONNECTED = 2,
    ERROR = 15 // the largest value 4 unsigned bits can hold
};
sizeof(enum NetworkState); // Same size as unsigned _BitInt(4): 1 byte on typical implementations
This matters for protocol headers, binary formats, and any case where you’re matching a specific layout. I’ve had to use bit fields for this in the past, which was always a compromise.
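Here’s a sketch of what that buys you in a wire format (the message types are hypothetical):
enum MsgType : unsigned char { MSG_PING = 1, MSG_DATA = 2 };

typedef struct {
    enum MsgType type; // exactly 1 byte, guaranteed by the underlying type
    unsigned char len;
} WireHeader;

static_assert(sizeof(WireHeader) == 2, "header layout is fixed");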
Standard Attributes
C23 finally adopts the [[...]] syntax from C++. These are standardized compiler hints:
// Mark a function as deprecated
[[deprecated]]
void old_api(void) {
    // ...
}
// Warn if the return value is ignored
[[nodiscard]]
int acquire_resource(void) {
    // ...
}
// Suppress unused variable warnings
void process(void) {
    [[maybe_unused]] int debug_value = compute_expensive_debug_info();
    // ...
}
// Mark intentional fall-through in a switch
switch (state) {
    case LOADING:
        initialize();
        [[fallthrough]]; // Intentional, not a bug
    case READY:
        process();
        break;
}
These are particularly useful for large codebases where you want to enforce practices without scattered pragmas or compiler flags.
Checked Arithmetic Functions
C23 adds <stdckdint.h> with type-generic checked arithmetic macros that catch integer overflow bugs:
#include <limits.h>
#include <stdckdint.h>
#include <stdio.h>
int a = INT_MAX, b = 1;
int result;
// Without checked arithmetic, this is undefined behavior for signed types
// (unsigned types would silently wrap around instead)
// result = a + b; // UB: signed integer overflow
// With checked arithmetic
if (ckd_add(&result, a, b)) {
    // Overflow occurred!
    fprintf(stderr, "Addition overflow\n");
} else {
    // result is safe to use
    printf("Result: %d\n", result);
}
// Also works for subtraction and multiplication
ckd_sub(&result, a, b);
ckd_mul(&result, a, b);
This is critical for security. Integer overflow vulnerabilities have been exploited for decades. Having standard library support for checked arithmetic is long overdue.
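The classic place this matters is allocation-size math. A hedged sketch (alloc_array is my name, not a library function):
#include <stdckdint.h>
#include <stdlib.h>

void *alloc_array(size_t count, size_t elem_size) {
    size_t bytes;
    if (ckd_mul(&bytes, count, elem_size))
        return NULL; // count * elem_size would overflow size_t
    return malloc(bytes);
}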
Binary Literals and Digit Separators
Small but surprisingly useful for readability:
// Binary literals for bit manipulation
unsigned flags = 0b1010'1100; // Much clearer than 0xAC
// Digit separators (C23 uses ' like C++) for large numbers
unsigned long long huge = 1'000'000'000'000ULL; // vs 1000000000000ULL
// Bitmasks are now readable
unsigned long long mask = 0xFFFF'FFFF'0000'0000ULL;
When you’re working with bitwise operations and hardware registers, this makes code significantly more self-documenting.
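A quick sketch of the kind of code that benefits (a hypothetical control register, not any real device):
enum {
    CTRL_ENABLE = 0b0000'0001,
    CTRL_RESET  = 0b0000'0010,
    CTRL_IRQ_EN = 0b1000'0000
};
unsigned ctrl = CTRL_ENABLE | CTRL_IRQ_EN; // reads like the datasheet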
Auto Type Deduction
C23 adds auto for type inference, but it’s more limited than C++:
auto x = 42; // x is int
auto y = 3.14; // y is double
auto ptr = &x; // ptr is int*
// Cannot be used in function parameters or return types
// auto func(auto x) { } // NOT valid C23
The limitation prevents full template-like metaprogramming, which keeps C’s compilation model simple. But for local variables, it reduces boilerplate.
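It shines with expressions whose types are a mouthful to spell. A small sketch:
#include <string.h>

void inspect(const char *s) {
    auto len = strlen(s);   // len is size_t, without writing size_t
    auto mid = s + len / 2; // mid is const char *
    (void)mid;
}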
The K&R Function Definition Purge
Finally, the standard removed K&R-style function definitions. This:
// Old K&R style, REMOVED from C23
int add(a, b)
int a, b;
{
return a + b;
}
Is no longer valid. You must use prototypes:
int add(int a, int b) {
return a + b;
}
This is a breaking change for legacy code, but I’ve never had to modernize a K&R codebase myself, so I can’t say how disruptive the migration is in practice.
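One related cleanup landed alongside the purge: an empty parameter list now means “takes no arguments,” exactly like (void), instead of “unspecified arguments”:
// Pre-C23: int f(); meant "arguments unknown", so f(1, 2, 3) compiled
// C23: int f(); means int f(void), and f(1, 2, 3) is a compile error
int f();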
Why These Additions Matter
I want to be honest: C23 doesn’t revolutionize the language. There’s no garbage collection, no exceptions, no memory-safety guarantees. What it does is address years of accumulated friction points while staying true to C’s philosophy of “you know what you’re doing.”
The community response has been largely positive because these features solve real problems without adding unnecessary complexity. constexpr lets you write proper compile-time constants. #embed eliminates build-script hell. _BitInt makes hardware programming less of a wrestling match with the type system.
The additions are pragmatic, which is exactly what modern C should be. Not trendy, not revolutionary, but genuinely useful for systems programmers who want a better C without becoming C++.
If you’re still on C99 or C11, there’s no urgency to migrate. But if you’re starting new projects, especially in embedded systems, performance critical code, or anywhere you’re dealing with binary data and hardware interfaces, C23 has enough useful features to make the jump worthwhile.
The standard is finalized. Compiler support is rolling out across GCC, Clang, and even MSVC. Give it a shot. I think you’ll find it’s the C we should have had after 50 years of the language’s existence.