What's new in .NET Core 3.0 (Preview 2)

This article describes what is new in .NET Core 3.0 Preview 2. One of the biggest enhancements is support for Windows desktop applications (Windows only). By using a .NET Core 3.0 SDK component called Windows Desktop, you can port your Windows Forms and Windows Presentation Foundation (WPF) applications. To be clear, the Windows Desktop component is supported and included only on Windows. For more information, see the Windows desktop section below.

.NET Core 3.0 adds support for C# 8.0.

Download and get started with .NET Core 3.0 Preview 2 right now on Windows, macOS, and Linux. You can see complete details of the release in the .NET Core 3.0 Preview 2 release notes.

For more information about what was released with each version, see the following announcements:

C# 8

.NET Core 3.0 supports C# 8, and as of .NET Core 3.0 Preview 2, supports these new features. For more information about C# 8.0 features, see the following blog posts:

Ranges and indices

The new Index type can be used for indexing. You can create an Index from an int that counts from the beginning, or with the ^ prefix operator (C#) that counts from the end:

Index i1 = 3;  // number 3 from beginning
Index i2 = ^4; // number 4 from end
int[] a = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };
Console.WriteLine($"{a[i1]}, {a[i2]}"); // "3, 6"

There is also a Range type, which consists of two Index values, one for the start and one for the end, and can be written with an x..y range expression (C#). You can then index with a Range to produce a slice:

var slice = a[i1..i2]; // { 3, 4, 5 }

Async streams

The IAsyncEnumerable<T> type is a new asynchronous version of IEnumerable<T>. The language lets you use await foreach over IAsyncEnumerable<T> to consume its elements, and use yield return in methods that return it to produce elements.

The following example demonstrates both production and consumption of async streams. The foreach loop is async, and the method itself uses yield return to produce an async stream for callers. This pattern (using yield return) is the recommended model for producing async streams.

async IAsyncEnumerable<int> GetBigResultsAsync()
{
    await foreach (var result in GetResultsAsync())
    {
        if (result > 20) yield return result;
    }
}

In addition to being able to await foreach, you can also create async iterators, for example, an iterator that returns IAsyncEnumerable<T>/IAsyncEnumerator<T> and in which you can both await and yield. For objects that need to be disposed, you can use IAsyncDisposable, which various BCL types implement, such as Stream and Timer.
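As a complete, minimal sketch, production and consumption fit together as shown below. The GetResultsAsync producer here is a hypothetical stand-in that simulates asynchronous work, not an API from the source:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public class AsyncStreamDemo
{
    // Hypothetical producer: an async iterator that simulates async work
    // and yields values to its caller one at a time.
    public static async IAsyncEnumerable<int> GetResultsAsync()
    {
        for (int i = 10; i <= 30; i += 10)
        {
            await Task.Delay(1); // stand-in for real async work (I/O, network)
            yield return i;
        }
    }

    public static async Task Main()
    {
        // Consume the async stream with await foreach.
        await foreach (int result in GetResultsAsync())
        {
            Console.WriteLine(result); // prints 10, then 20, then 30
        }
    }
}
```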

Note

To develop with async streams using either Visual Studio 2019 or the latest preview of the C# extension for Visual Studio Code, you need .NET Core 3.0 Preview 2. If you are using .NET Core 3.0 Preview 2 at the command line, everything works as expected.

Using Declarations

Using declarations are a new way to ensure your object is properly disposed. A using declaration keeps the object alive while it is in scope; once it goes out of scope, it is automatically disposed. This reduces nested using statements and makes your code cleaner.

static void Main(string[] args)
{
    using var options = Parse(args);
    if (options["verbose"]) { WriteLine("Logging..."); }

} // options disposed here

Switch Expressions

A switch expression is a cleaner alternative to a switch statement and, because it is an expression, it returns a value. Switch expressions are also fully integrated with pattern matching and use the discard pattern, _, to represent the default value.

You can see the syntax for switch expressions in the following example:

static string Display(object o) => o switch
{
    Point { X: 0, Y: 0 }         => "origin",
    Point { X: var x, Y: var y } => $"({x}, {y})",
    _                            => "unknown"
};

There are two patterns at play in this example. o first matches with the Point type pattern and then with the property pattern inside the {curly braces}. The _ describes the discard pattern, which is the same as default for switch statements.

Patterns enable you to write declarative code that captures your intent instead of procedural code that implements tests for it. The compiler becomes responsible for implementing that boring procedural code and is guaranteed to always do it correctly.

There will still be cases where a switch statement is a better choice than a switch expression, and patterns can be used with both syntax styles.
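For contrast, the same Display logic can be written as a classic switch statement with pattern matching. This sketch assumes a simple Point class with X and Y properties, which the original example implies but does not define:

```csharp
using System;

public class Point
{
    public int X { get; set; }
    public int Y { get; set; }
}

public class SwitchDemo
{
    // The same logic as the switch expression, as a switch statement.
    public static string Display(object o)
    {
        switch (o)
        {
            case Point p when p.X == 0 && p.Y == 0:
                return "origin";
            case Point p:
                return $"({p.X}, {p.Y})";
            default:
                return "unknown";
        }
    }

    public static void Main()
    {
        Console.WriteLine(Display(new Point { X = 0, Y = 0 })); // origin
        Console.WriteLine(Display(new Point { X = 1, Y = 2 })); // (1, 2)
        Console.WriteLine(Display("not a point"));              // unknown
    }
}
```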

For more information, see Do more with patterns in C# 8.0.

IEEE Floating-point improvements

Floating-point APIs are in the process of being updated to comply with the IEEE 754-2008 revision. The goal of these changes is to expose all "required" operations and ensure that they are behaviorally compliant with the IEEE spec.

Parsing and formatting fixes:

  • Correctly parse and round inputs of any length.
  • Correctly parse and format negative zero.
  • Correctly parse Infinity and NaN by performing a case-insensitive check and allowing an optional preceding + where applicable.

New Math APIs include:

  • BitIncrement/BitDecrement
    Correspond to the nextUp and nextDown IEEE operations. They return the smallest floating-point number that compares greater than (or less than) the input, respectively. For example, Math.BitIncrement(0.0) returns double.Epsilon.

  • MaxMagnitude/MinMagnitude
    Correspond to the maxNumMag and minNumMag IEEE operations. They return the value that is greater (or lesser) in magnitude of the two inputs, respectively. For example, Math.MaxMagnitude(2.0, -3.0) returns -3.0.

  • ILogB
    Corresponds to the logB IEEE operation, which returns an integral value: the integral base-2 log of the input parameter. This is effectively the same as floor(log2(x)), but done with minimal rounding error.

  • ScaleB
    Corresponds to the scaleB IEEE operation, which takes an integral value n and effectively returns x * pow(2, n), but with minimal rounding error.

  • Log2
    Corresponds to the log2 IEEE operation; it returns the base-2 logarithm with minimal rounding error.

  • FusedMultiplyAdd
    Corresponds to the fma IEEE operation; it performs a fused multiply-add, computing (x * y) + z as a single operation and thereby minimizing rounding error. For example, FusedMultiplyAdd(1e308, 2.0, -1e308) returns 1e308, whereas the regular (1e308 * 2.0) - 1e308 returns double.PositiveInfinity.

  • CopySign
    Corresponds to the copySign IEEE operation; it returns the value of x with the sign of y.
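The operations above can be exercised in a short program; the expected values follow directly from the IEEE definitions described in the list:

```csharp
using System;

class IeeeMathDemo
{
    static void Main()
    {
        // BitIncrement: smallest double greater than the input.
        Console.WriteLine(Math.BitIncrement(0.0) == double.Epsilon); // True

        // MaxMagnitude: returns the input with the larger magnitude.
        Console.WriteLine(Math.MaxMagnitude(2.0, -3.0)); // -3

        // CopySign: value of x with the sign of y.
        Console.WriteLine(Math.CopySign(8.5, -1.0)); // -8.5

        // ILogB: integral base-2 log, effectively floor(log2(x)).
        Console.WriteLine(Math.ILogB(1024.0)); // 10

        // ScaleB: effectively x * pow(2, n), with minimal rounding error.
        Console.WriteLine(Math.ScaleB(1.5, 3)); // 12

        // FusedMultiplyAdd: (x * y) + z as a single rounded operation,
        // so the intermediate product does not overflow to infinity.
        Console.WriteLine(Math.FusedMultiplyAdd(1e308, 2.0, -1e308)); // 1E+308
    }
}
```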

.NET Platform Dependent Intrinsics

APIs have been added that allow access to certain perf-oriented CPU instructions, such as the SIMD or Bit Manipulation instruction sets. These instructions can help achieve big performance improvements in certain scenarios, such as processing data efficiently in parallel. In addition to exposing the APIs for your programs to use, the .NET libraries have begun using these instructions to improve performance.

The following CoreCLR PRs demonstrate a few of the intrinsics, either via implementation or use:

For more information, see .NET Platform Dependent Intrinsics, which describes an approach for exposing this hardware infrastructure, allowing Microsoft, chip vendors, or any other company or individual to define hardware/chip APIs that should be exposed to .NET code.

Default executables

.NET Core will now build framework-dependent executables by default. This is new for applications that use a globally installed version of .NET Core. Until now, only self-contained deployments would produce an executable.

During dotnet build or dotnet publish, an executable is created that matches the environment and platform of the SDK you are using. You can expect the same things with these executables as you would with other native executables, such as:

  • You can double-click on the executable.
  • You can launch the application from a command prompt directly, such as myapp.exe on Windows, and ./myapp on Linux and macOS.

Build copies dependencies

dotnet build now copies NuGet dependencies for your application from the NuGet cache to the build output folder. Previously, dependencies were only copied as part of dotnet publish.

Some operations, like linking and Razor page publishing, still require publishing.

Local dotnet tools

Warning

There was a change in .NET Core Local Tools between .NET Core 3.0 Preview 1 and .NET Core 3.0 Preview 2. If you tried out local tools in Preview 1 by running a command like dotnet tool restore or dotnet tool install, you need to delete your local tools cache folder before local tools will work correctly in Preview 2. This folder is located at:

On macOS and Linux: rm -r $HOME/.dotnet/toolResolverCache

On Windows: rmdir /s %USERPROFILE%\.dotnet\toolResolverCache

If you do not delete this folder, you will receive an error.

While .NET Core 2.1 supported global tools, .NET Core 3.0 now has local tools. Local tools are similar to global tools but are associated with a particular location on disk. This enables per-project and per-repository tooling. Any tool installed locally isn't available globally. Tools are distributed as NuGet packages.

Local tools rely on a manifest file named dotnet-tools.json in your current directory. This manifest file defines the tools available at that folder and below. By creating the manifest file at the root of your repository, you ensure that anyone who clones your code can restore and use the tools needed to work with it.

To create a dotnet-tools.json manifest file, use:

dotnet new tool-manifest

Add a new tool to the local manifest with:

dotnet tool install <packageId>

You can also list the tools in the local manifest with:

dotnet tool list

To see what tools are installed globally, use:

dotnet tool list -g

When the local tools manifest file is available, but the tools defined in the manifest have not been installed, use the following command to automatically download and install those tools:

dotnet tool restore

Run a local tool with the following command:

dotnet tool run <tool-command-name>

When a local tool is run, dotnet searches for a manifest up the current directory structure. When a tool manifest file is found, it is searched for the requested tool. If the tool is found in the manifest, but not the cache, the user receives an error and needs to run dotnet tool restore.

To remove a tool from the local tool manifest file, run the following command:

dotnet tool uninstall <packageId>

The tool manifest file is designed to allow hand editing – which you might do to update the required version for working with the repository. Here is an example dotnet-tools.json file:

{
  "version": 1,
  "isRoot": true,
  "tools": {
    "dotnetsay": {
      "version": "2.1.4",
      "commands": [
        "dotnetsay"
      ]
    },
    "t-rex": {
      "version": "1.0.103",
      "commands": [
        "t-rex"
      ]
    }
  }
}

For both global and local tools, a compatible version of the runtime is required. Many tools currently on NuGet.org target .NET Core Runtime 2.1. To install those globally or locally, you would still need to install the .NET Core 2.1 Runtime.

Windows desktop

Starting with .NET Core 3.0 Preview 1, you can build Windows desktop applications using WPF and Windows Forms. These frameworks also support using modern controls and Fluent styling from the Windows UI XAML Library (WinUI) via XAML islands.

The Windows Desktop component is part of the Windows .NET Core 3.0 SDK.

You can create a new WPF or Windows Forms app with the following dotnet commands:

dotnet new wpf
dotnet new winforms

Visual Studio 2019 adds New Project templates for .NET Core 3.0 Windows Forms and WPF. The designers are not yet supported, but you can open, launch, and debug these projects in Visual Studio 2019.

Visual Studio 2017 15.9 adds the ability to enable .NET Core previews, but you need to turn that feature on, and it's not a supported scenario.

The new projects are the same as existing .NET Core projects, with a couple of additions. Here is a comparison of a basic .NET Core console project with a basic Windows Forms or WPF project.

In a .NET Core console project, the project uses the Microsoft.NET.Sdk SDK and declares a dependency on .NET Core 3.0 via the netcoreapp3.0 target framework. To create a Windows Desktop app, use the Microsoft.NET.Sdk.WindowsDesktop SDK and choose which UI framework to use:

-<Project Sdk="Microsoft.NET.Sdk">
+<Project Sdk="Microsoft.NET.Sdk.WindowsDesktop">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp3.0</TargetFramework>
+   <UseWPF>true</UseWPF>
  </PropertyGroup>
</Project>

To choose Windows Forms over WPF, set UseWindowsForms instead of UseWPF:

<Project Sdk="Microsoft.NET.Sdk.WindowsDesktop">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp3.0</TargetFramework>
-   <UseWPF>true</UseWPF>
+   <UseWindowsForms>true</UseWindowsForms>
  </PropertyGroup>
</Project>

Both UseWPF and UseWindowsForms can be set to true if the app uses both frameworks, for example when a Windows Forms dialog is hosting a WPF control.
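For example, a project file that enables both frameworks could look like the following (a sketch combining the snippets above):

```xml
<Project Sdk="Microsoft.NET.Sdk.WindowsDesktop">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp3.0</TargetFramework>
    <UseWPF>true</UseWPF>
    <UseWindowsForms>true</UseWindowsForms>
  </PropertyGroup>
</Project>
```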

Please share your feedback on the dotnet/winforms, dotnet/wpf and dotnet/core repos.

MSIX Deployment for Windows Desktop

MSIX is a new Windows app package format. It can be used to deploy .NET Core 3.0 desktop applications to Windows 10.

The Windows Application Packaging Project, available in Visual Studio 2019, allows you to create MSIX packages with self-contained .NET Core applications.

Note

The .NET Core project file must specify the supported runtimes in the <RuntimeIdentifiers> property:

<RuntimeIdentifiers>win-x86;win-x64</RuntimeIdentifiers>

Fast built-in JSON support

The .NET ecosystem has relied on Json.NET and other popular JSON libraries, which continue to be good choices. Json.NET uses .NET strings as its base datatype, which are UTF-16 under the hood.

The new built-in JSON support is high-performance, low-allocation, and based on Span<byte>. Three new main JSON-related types have been added to .NET Core 3.0 in the System.Text.Json namespace.

Utf8JsonReader

System.Text.Json.Utf8JsonReader is a high-performance, low allocation, forward-only reader for UTF-8 encoded JSON text, read from a ReadOnlySpan<byte>. The Utf8JsonReader is a foundational, low-level type, that can be leveraged to build custom parsers and deserializers. Reading through a JSON payload using the new Utf8JsonReader is 2x faster than using the reader from Json.NET. It does not allocate until you need to actualize JSON tokens as (UTF-16) strings.

This new API will include the following components:

  • In Preview 1: JSON reader (sequential access)
  • Coming next: JSON writer, DOM (random access), POCO serializer, POCO deserializer

Here is the basic reader loop for the Utf8JsonReader that can be used as a starting point:

using System.Text.Json;

public static void Utf8JsonReaderLoop(ReadOnlySpan<byte> dataUtf8)
{
    var json = new Utf8JsonReader(dataUtf8, isFinalBlock: true, state: default);

    while (json.Read())
    {
        JsonTokenType tokenType = json.TokenType;
        ReadOnlySpan<byte> valueSpan = json.ValueSpan;
        switch (tokenType)
        {
            case JsonTokenType.StartObject:
            case JsonTokenType.EndObject:
                break;
            case JsonTokenType.StartArray:
            case JsonTokenType.EndArray:
                break;
            case JsonTokenType.PropertyName:
                break;
            case JsonTokenType.String:
                string valueString = json.GetStringValue();
                break;
            case JsonTokenType.Number:
                if (!json.TryGetInt32Value(out int valueInteger))
                {
                    throw new FormatException();
                }
                break;
            case JsonTokenType.True:
            case JsonTokenType.False:
                bool valueBool = json.GetBooleanValue();
                break;
            case JsonTokenType.Null:
                break;
            default:
                throw new ArgumentException();
        }
    }

    dataUtf8 = dataUtf8.Slice((int)json.BytesConsumed);
    JsonReaderState state = json.CurrentState;
}

Utf8JsonWriter

System.Text.Json.Utf8JsonWriter provides a high-performance, non-cached, forward-only way to write UTF-8 encoded JSON text from common .NET types like String, Int32, and DateTime. Like the reader, the writer is a foundational, low-level type, that can be leveraged to build custom serializers. Writing a JSON payload using the new Utf8JsonWriter is 30-80% faster than using the writer from Json.NET and does not allocate.

Here is a sample usage of the Utf8JsonWriter that can be used as a starting point:

static int WriteJson(IBufferWriter<byte> output, long[] extraData)
{
    var json = new Utf8JsonWriter(output, state: default);

    json.WriteStartObject();

    json.WriteNumber("age", 15, escape: false);
    json.WriteString("date", DateTime.Now);
    json.WriteString("first", "John");
    json.WriteString("last", "Smith");

    json.WriteStartArray("phoneNumbers", escape: false);
    json.WriteStringValue("425-000-1212", escape: false);
    json.WriteStringValue("425-000-1213");
    json.WriteEndArray();

    json.WriteStartObject("address");
    json.WriteString("street", "1 Microsoft Way");
    json.WriteString("city", "Redmond");
    json.WriteNumber("zip", 98052);
    json.WriteEndObject();

    json.WriteStartArray("ExtraArray");
    for (var i = 0; i < extraData.Length; i++)
    {
        json.WriteNumberValue(extraData[i]);
    }
    json.WriteEndArray();

    json.WriteEndObject();

    json.Flush(isFinalBlock: true);

    return (int)json.BytesWritten;
}

The Utf8JsonWriter accepts IBufferWriter<byte> as the output location to synchronously write the JSON data into, and you, as the caller, need to provide a concrete implementation. The platform does not currently include an implementation of this interface. For an example of IBufferWriter<byte>, see https://gist.github.com/ahsonkhan/c76a1cc4dc7107537c3fdc0079a68b35.
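Since the platform does not ship an implementation, a minimal array-backed IBufferWriter<byte> might look like the sketch below. The SimpleBufferWriter name and its doubling growth policy are illustrative, not a framework API; production code would also want argument validation and possibly array pooling:

```csharp
using System;
using System.Buffers;

// A minimal, array-backed IBufferWriter<byte> sketch.
public class SimpleBufferWriter : IBufferWriter<byte>
{
    private byte[] _buffer = new byte[256];
    private int _written;

    // The bytes written so far, e.g. the finished JSON payload.
    public ReadOnlyMemory<byte> WrittenMemory => _buffer.AsMemory(0, _written);

    // The caller reports how many bytes it wrote into the last span/memory.
    public void Advance(int count) => _written += count;

    public Memory<byte> GetMemory(int sizeHint = 0)
    {
        EnsureCapacity(sizeHint);
        return _buffer.AsMemory(_written);
    }

    public Span<byte> GetSpan(int sizeHint = 0)
    {
        EnsureCapacity(sizeHint);
        return _buffer.AsSpan(_written);
    }

    private void EnsureCapacity(int sizeHint)
    {
        if (sizeHint < 1) sizeHint = 1;
        if (_buffer.Length - _written < sizeHint)
        {
            // Grow by doubling, or to exactly what was requested if larger.
            Array.Resize(ref _buffer, Math.Max(_buffer.Length * 2, _written + sizeHint));
        }
    }
}
```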

JsonDocument

System.Text.Json.JsonDocument is built on top of the Utf8JsonReader. The JsonDocument provides the ability to parse JSON data and build a read-only Document Object Model (DOM) that can be queried to support random access and enumeration. The JSON elements that compose the data can be accessed via the JsonElement type, which is exposed by the JsonDocument as a property called RootElement. The JsonElement contains the JSON array and object enumerators along with APIs to convert JSON text to common .NET types. Parsing a typical JSON payload and accessing all its members using the JsonDocument is 2-3x faster than Json.NET, with very few allocations for data that is reasonably sized (that is, < 1 MB).

Here is a sample usage of the JsonDocument and JsonElement that can be used as a starting point:

static double ParseJson()
{
    const string json = " [ { \"name\": \"John\" }, [ \"425-000-1212\", 15 ], { \"grades\": [ 90, 80, 100, 75 ] } ]";

    double average = -1;

    using (JsonDocument doc = JsonDocument.Parse(json))
    {
        JsonElement root = doc.RootElement;
        JsonElement info = root[1];

        string phoneNumber = info[0].GetString();
        int age = info[1].GetInt32();

        JsonElement grades = root[2].GetProperty("grades");

        double sum = 0;
        foreach (JsonElement grade in grades.EnumerateArray())
        {
            sum += grade.GetInt32();
        }

        int numberOfCourses = grades.GetArrayLength();
        average = sum / numberOfCourses;
    }

    return average;
}

Assembly Unloadability

Assembly unloadability is a new capability of AssemblyLoadContext. This new feature is largely transparent from an API perspective, exposed with just a few new APIs. It enables a loader context to be unloaded, releasing all memory for instantiated types, static fields, and the assembly itself. An application should be able to load and unload assemblies via this mechanism forever without experiencing a memory leak.

This new capability can be used for scenarios similar to:

  • Plugin scenarios where dynamic plugin loading and unloading is required.
  • Dynamically compiling, running and then flushing code. Useful for web sites, scripting engines, etc.
  • Loading assemblies for introspection (like ReflectionOnlyLoad), although MetadataLoadContext (released in Preview 1) will be a better choice in many cases.

For more information, see the Using Unloadability document.

Assembly unloading requires significant care to ensure that all references to managed objects from outside a loader context are understood and managed. When the loader context is requested to unload, any outside references must first be released so that the loader context references only itself.
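A minimal sketch of the load/unload pattern follows. It loads a copy of the program's own assembly purely for illustration (a real plugin host would load a plugin's assembly path), and the context/collection mechanics shown here are a common pattern rather than the only correct approach:

```csharp
using System;
using System.Reflection;
using System.Runtime.CompilerServices;
using System.Runtime.Loader;

public class UnloadDemo
{
    // NoInlining keeps the context and assembly locals out of the caller's
    // frame so the collectible context can actually be collected.
    [MethodImpl(MethodImplOptions.NoInlining)]
    public static WeakReference LoadAndUnload(string assemblyPath)
    {
        var context = new AssemblyLoadContext("PluginContext", isCollectible: true);
        Assembly assembly = context.LoadFromAssemblyPath(assemblyPath);
        Console.WriteLine($"Loaded {assembly.GetName().Name}");

        context.Unload(); // initiates the unload; completes when no references remain
        return new WeakReference(context);
    }

    public static void Main()
    {
        WeakReference contextRef =
            LoadAndUnload(Assembly.GetExecutingAssembly().Location);

        // The context is collected once the GC runs and nothing references it.
        for (int i = 0; contextRef.IsAlive && i < 10; i++)
        {
            GC.Collect();
            GC.WaitForPendingFinalizers();
        }

        Console.WriteLine($"Context collected: {!contextRef.IsAlive}");
    }
}
```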

Assembly unloadability was provided in the .NET Framework by Application Domains (AppDomains), which are not supported with .NET Core. AppDomains had both benefits and limitations compared to this new model. Consider this new loader model to be more flexible and higher performance when compared to AppDomains.

Windows Native Interop

Windows offers a rich native API, in the form of flat C APIs, COM, and WinRT. P/Invoke has been supported since .NET Core 1.0; .NET Core 3.0 adds support for CoCreating COM APIs and activating WinRT APIs.

You can see an example of using COM with the Excel Demo source code.

Type: SequenceReader

In .NET Core 3.0, System.Buffers.SequenceReader has been added, which can be used as a reader for ReadOnlySequence<T>. It allows easy, high-performance, low-allocation parsing of System.IO.Pipelines data that can cross multiple backing buffers.

The following example breaks an input sequence into valid CR/LF-delimited lines:

private static ReadOnlySpan<byte> CRLF => new byte[] { (byte)'\r', (byte)'\n' };

public static void ReadLines(ReadOnlySequence<byte> sequence)
{
    SequenceReader<byte> reader = new SequenceReader<byte>(sequence);

    while (!reader.End)
    {
        if (!reader.TryReadToAny(out ReadOnlySpan<byte> line, CRLF, advancePastDelimiter: false))
        {
            // Couldn't find another delimiter
            // ...
        }

        if (!reader.IsNext(CRLF, advancePast: true))
        {
            // Not a good CR/LF pair
            // ...
        }

        // line is valid, process
        ProcessLine(line);
    }
}

Type: MetadataLoadContext

The MetadataLoadContext type has been added, which enables reading assembly metadata without affecting the caller's application domain. Assemblies are read as data, including assemblies built for different architectures and platforms than the current runtime environment. MetadataLoadContext overlaps with the ReflectionOnlyLoad functionality, which is available only in the .NET Framework.

MetadataLoadContext is available in the System.Reflection.MetadataLoadContext package. It is a .NET Standard 2.0 package.

The MetadataLoadContext exposes APIs similar to the AssemblyLoadContext type, but is not based on that type. Much like AssemblyLoadContext, the MetadataLoadContext enables loading assemblies within an isolated assembly loading universe. MetadataLoadContext APIs return Assembly objects, enabling the use of familiar reflection APIs. Execution-oriented APIs, such as MethodBase.Invoke, are not allowed and will throw InvalidOperationException.

The following sample demonstrates how to find concrete types in an assembly that implement a given interface:

var paths = new string[] {@"C:\myapp\mscorlib.dll", @"C:\myapp\myapp.dll"};
var resolver = new PathAssemblyResolver(paths);
using (var lc = new MetadataLoadContext(resolver))
{
    Assembly a = lc.LoadFromAssemblyName("myapp");
    Type myInterface = a.GetType("MyApp.IPluginInterface");
    foreach (Type t in a.GetTypes())
    {
        if (t.IsClass && myInterface.IsAssignableFrom(t))
            Console.WriteLine($"Class {t.FullName} implements IPluginInterface");
    }
}

Scenarios for MetadataLoadContext include design-time features, build-time tooling, and runtime light-up features that need to inspect a set of assemblies as data and have all file locks and memory freed after inspection is performed.

The MetadataLoadContext has a resolver class passed to its constructor. The resolver's job is to load an Assembly given its AssemblyName. The resolver class derives from the abstract MetadataAssemblyResolver class. An implementation of the resolver for path-based scenarios is provided with PathAssemblyResolver.

The MetadataLoadContext tests demonstrate many use cases. The Assembly tests are a good place to start.

TLS 1.3 & OpenSSL 1.1.1 on Linux

.NET Core will now take advantage of TLS 1.3 support in OpenSSL 1.1.1, when it is available in a given environment. There are multiple benefits of TLS 1.3, per the OpenSSL team:

  • Improved connection times due to a reduction in the number of round trips required between the client and server.

  • Improved security due to the removal of various obsolete and insecure cryptographic algorithms and encryption of more of the connection handshake.

.NET Core 3.0 Preview 1 is capable of utilizing OpenSSL 1.1.1, OpenSSL 1.1.0, or OpenSSL 1.0.2 (the best version found on a given Linux system). When OpenSSL 1.1.1 is available, the SslStream and HttpClient types will use TLS 1.3 when using SslProtocols.None (system default protocols), assuming both the client and server support TLS 1.3.

The following sample demonstrates .NET Core 3.0 Preview 1 on Ubuntu 18.10 connecting to https://www.cloudflare.com:

using System;
using System.Net.Security;
using System.Net.Sockets;
using System.Threading.Tasks;

namespace tlstest
{
    class Program
    {
        static async Task Main()
        {
            using (TcpClient tcpClient = new TcpClient())
            {
                string targetHost = "www.cloudflare.com";

                await tcpClient.ConnectAsync(targetHost, 443);

                using (SslStream sslStream = new SslStream(tcpClient.GetStream()))
                {
                    await sslStream.AuthenticateAsClientAsync(targetHost);
                    await Console.Out.WriteLineAsync($"Connected to {targetHost} with {sslStream.SslProtocol}");
                }
            }
        }
    }
}
user@comp-ubuntu1810:~/tlstest$ dotnet run
Connected to www.cloudflare.com with Tls13
user@comp-ubuntu1810:~/tlstest$ openssl version
OpenSSL 1.1.1  11 Sep 2018

Important

Windows and macOS do not yet support TLS 1.3. .NET Core 3.0 will support TLS 1.3 on these operating systems when support becomes available.

Cryptography

Support has been added for AES-GCM and AES-CCM ciphers, implemented via System.Security.Cryptography.AesGcm and System.Security.Cryptography.AesCcm. These algorithms are both Authenticated Encryption with Association Data (AEAD) algorithms, and the first Authenticated Encryption (AE) algorithms added to .NET Core.

The following code demonstrates using AesGcm cipher to encrypt and decrypt random data.

The code for AesCcm would look almost identical (only the class variable names would be different).

// key should be: pre-known, derived, or transported via another channel, such as RSA encryption
byte[] key = new byte[16];
RandomNumberGenerator.Fill(key);

byte[] nonce = new byte[12];
RandomNumberGenerator.Fill(nonce);

// normally this would be your data
byte[] dataToEncrypt = new byte[1234];
byte[] associatedData = new byte[333];
RandomNumberGenerator.Fill(dataToEncrypt);
RandomNumberGenerator.Fill(associatedData);

// these will be filled during the encryption
byte[] tag = new byte[16];
byte[] ciphertext = new byte[dataToEncrypt.Length];

using (AesGcm aesGcm = new AesGcm(key))
{
    aesGcm.Encrypt(nonce, dataToEncrypt, ciphertext, tag, associatedData);
}

// tag, nonce, ciphertext, associatedData should be sent to the other party

byte[] decryptedData = new byte[ciphertext.Length];

using (AesGcm aesGcm = new AesGcm(key))
{
    aesGcm.Decrypt(nonce, ciphertext, tag, decryptedData, associatedData);
}

// do something with the data
// this should always print that data is the same
Console.WriteLine($"AES-GCM: Decrypted data is {(dataToEncrypt.SequenceEqual(decryptedData) ? "the same as" : "different than")} original data.");

Cryptographic Key Import/Export

.NET Core 3.0 Preview 1 supports the import and export of asymmetric public and private keys from standard formats, without needing to use an X.509 certificate.

All key types (RSA, DSA, ECDsa, ECDiffieHellman) support the X.509 SubjectPublicKeyInfo format for public keys, and the PKCS#8 PrivateKeyInfo and PKCS#8 EncryptedPrivateKeyInfo formats for private keys. RSA additionally supports PKCS#1 RSAPublicKey and PKCS#1 RSAPrivateKey. The export methods all produce DER-encoded binary data, and the import methods expect the same. If a key is stored in the text-friendly PEM format, the caller will need to base64-decode the content before calling an import method.

using System;
using System.IO;
using System.Security.Cryptography;

namespace rsakeyprint
{
    class Program
    {
        static void Main(string[] args)
        {
            using (RSA rsa = RSA.Create())
            {
                byte[] keyBytes = File.ReadAllBytes(args[0]);
                rsa.ImportRSAPrivateKey(keyBytes, out int bytesRead);

                Console.WriteLine($"Read {bytesRead} bytes, {keyBytes.Length-bytesRead} extra byte(s) in file.");
                RSAParameters rsaParameters = rsa.ExportParameters(true);
                Console.WriteLine(BitConverter.ToString(rsaParameters.D));
            }
        }
    }
}
user@comp-ubuntu1810:~/rsakeyprint$ echo Making a small key to save on screen space.
Making a small key to save on screen space.
user@comp-ubuntu1810:~/rsakeyprint$ openssl genrsa 768 | openssl rsa -outform der -out rsa.key
Generating RSA private key, 768 bit long modulus (2 primes)
..+++++++
........+++++++
e is 65537 (0x010001)
writing RSA key
user@comp-ubuntu1810:~/rsakeyprint$ dotnet run rsa.key
Read 461 bytes, 0 extra byte(s) in file.
0F-D0-82-34-F8-13-38-4A-7F-C7-52-4A-F6-93-F8-FB-6D-98-7A-6A-04-3B-BC-35-8C-7D-AC-A5-A3-6E-AD-C1-66-30-81-2C-2A-DE-DA-60-03-6A-2C-D9-76-15-7F-61-97-57-
79-E1-6E-45-62-C3-83-04-97-CB-32-EF-C5-17-5F-99-60-92-AE-B6-34-6F-30-06-03-AC-BF-15-24-43-84-EB-83-60-EF-4D-3B-BD-D9-5D-56-26-F0-51-CE-F1
user@comp-ubuntu1810:~/rsakeyprint$ openssl rsa -in rsa.key -inform der -text -noout | grep -A7 private
privateExponent:
    0f:d0:82:34:f8:13:38:4a:7f:c7:52:4a:f6:93:f8:
    fb:6d:98:7a:6a:04:3b:bc:35:8c:7d:ac:a5:a3:6e:
    ad:c1:66:30:81:2c:2a:de:da:60:03:6a:2c:d9:76:
    15:7f:61:97:57:79:e1:6e:45:62:c3:83:04:97:cb:
    32:ef:c5:17:5f:99:60:92:ae:b6:34:6f:30:06:03:
    ac:bf:15:24:43:84:eb:83:60:ef:4d:3b:bd:d9:5d:
    56:26:f0:51:ce:f1

PKCS#8 files can be inspected with the System.Security.Cryptography.Pkcs.Pkcs8PrivateKeyInfo class.

PFX/PKCS#12 files can be inspected and manipulated with System.Security.Cryptography.Pkcs.Pkcs12Info and System.Security.Cryptography.Pkcs.Pkcs12Builder, respectively.

SerialPort for Linux

.NET Core 3.0 now supports System.IO.Ports.SerialPort on Linux.

Previously, .NET Core only supported using the SerialPort type on Windows.

More BCL Improvements

The Span<T>, Memory<T>, and related types that were introduced in .NET Core 2.1, have been optimized in .NET Core 3.0. Common operations such as span construction, slicing, parsing, and formatting now perform better.

Additionally, types like String have seen under-the-cover improvements to make them more efficient when used as keys with Dictionary<TKey, TValue> and other collections. No code changes are required to benefit from these improvements.

The following improvements are also new in .NET Core 3 Preview 1:

  • Brotli support built in to HttpClient
  • ThreadPool.UnsafeQueueWorkItem(IThreadPoolWorkItem)
  • Unsafe.Unbox
  • CancellationToken.Unregister
  • Complex arithmetic operators
  • Socket APIs for TCP keep alive
  • StringBuilder.GetChunks
  • IPEndPoint parsing
  • RandomNumberGenerator.GetInt32
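A few of these additions can be demonstrated directly. This is a short sketch; the random value printed will naturally vary from run to run:

```csharp
using System;
using System.Net;
using System.Security.Cryptography;
using System.Text;

class BclDemo
{
    static void Main()
    {
        // IPEndPoint parsing (new in .NET Core 3.0).
        IPEndPoint endpoint = IPEndPoint.Parse("192.168.0.1:8080");
        Console.WriteLine($"{endpoint.Address} port {endpoint.Port}"); // 192.168.0.1 port 8080

        // RandomNumberGenerator.GetInt32: cryptographically strong integer in a range.
        int value = RandomNumberGenerator.GetInt32(0, 100);
        Console.WriteLine(value >= 0 && value < 100); // True

        // StringBuilder.GetChunks: enumerate the builder's internal buffers
        // without materializing a single string.
        var builder = new StringBuilder("hello, ").Append("world");
        int totalChars = 0;
        foreach (ReadOnlyMemory<char> chunk in builder.GetChunks())
        {
            totalChars += chunk.Length;
        }
        Console.WriteLine(totalChars); // 12
    }
}
```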

Tiered compilation

Tiered compilation is on by default with .NET Core 3.0. It is a feature that enables the runtime to use the Just-In-Time (JIT) compiler more adaptively, getting better performance both at startup and for maximum throughput.

This feature was added as an opt-in feature in .NET Core 2.1, was enabled by default in .NET Core 2.2 Preview 2, and was then reverted back to opt-in for the final .NET Core 2.2 release.

ARM64 Linux support

Support has been added for ARM64 on Linux. The primary use case for ARM64 is currently IoT scenarios.

Alpine, Debian and Ubuntu Docker images are available for .NET Core for ARM64.

Please check .NET Core ARM64 Status for more information.

Note

ARM64 Windows support isn't yet available.

Install .NET Core 3.0 Previews on Linux with Snap

Snap is the preferred way to install and try .NET Core previews on Linux distributions that support Snap.

After configuring Snap on your system, run the following command to install the .NET Core 3.0 Preview SDK:

sudo snap install dotnet-sdk --beta --classic

When .NET Core is installed using the Snap package, the default .NET Core command is dotnet-sdk.dotnet, as opposed to just dotnet. The benefit of the namespaced command is that it will not conflict with a globally installed .NET Core version you may have. This command can be aliased to dotnet with:

sudo snap alias dotnet-sdk.dotnet dotnet

Some distros require an additional step to enable access to the SSL certificate. See our Linux Setup for details.

GPIO Support for Raspberry Pi

Two new packages have been released to NuGet that you can use for GPIO programming.

The GPIO package includes APIs for GPIO, SPI, I2C, and PWM devices. The IoT bindings package includes device bindings for various chips and sensors, the same ones available at dotnet/iot - src/devices.

The updated serial port APIs that were announced as part of .NET Core 3.0 Preview 1 are not part of these packages but are available as part of the .NET Core platform.

Platform Support

.NET Core 3 will be supported on the following operating systems:

  • Windows Client: 7, 8.1, 10 (1607+)
  • Windows Server: 2012 R2 SP1+
  • macOS: 10.12+
  • RHEL: 6+
  • Fedora: 26+
  • Ubuntu: 16.04+
  • Debian: 9+
  • SLES: 12+
  • openSUSE: 42.3+
  • Alpine: 3.8+

Chip support follows:

  • x64 on Windows, macOS, and Linux
  • x86 on Windows
  • ARM32 on Windows and Linux
  • ARM64 on Linux

For Linux, ARM32 is supported on Debian 9+ and Ubuntu 16.04+. For ARM64, it is the same as ARM32 with the addition of Alpine 3.8. These are the same versions of those distros as are supported for x64.

Docker images for .NET Core 3.0 are available at microsoft/dotnet on Docker Hub. Microsoft is currently adopting the Microsoft Container Registry (MCR), and it is expected that the final .NET Core 3.0 images will be published only to MCR.