New memory model migration guide
The new memory model is Experimental. It's not production-ready and may be changed at any time. Opt-in is required (see the details below), and you should use it only for evaluation purposes. We would appreciate your feedback on it in YouTrack.
In the new memory model (MM), we're lifting restrictions on object sharing: there's no need to freeze objects to share them between threads anymore.
In particular:
- Top-level properties can be accessed and modified by any thread without using `@SharedImmutable`.
- Objects passing through interop can be accessed and modified by any thread without freezing them.
- `Worker.executeAfter` no longer requires `operation` to be frozen.
- `Worker.execute` no longer requires `producer` to return an isolated object subgraph.
- Reference cycles containing `AtomicReference` and `FreezableAtomicReference` do not cause memory leaks.
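The lifted restrictions can be sketched with a tiny shared-state example. The names here (`sharedCounter`, `incrementFromBackgroundThread`) are illustrative; the snippet uses `kotlin.concurrent.thread` so it also runs on Kotlin/JVM, whose semantics the new MM follows:

```kotlin
import kotlin.concurrent.thread

// Under the new MM, a top-level property can be read and written from any thread
// without @SharedImmutable or freezing. The usual visibility and atomicity
// caveats of shared mutable state still apply.
var sharedCounter = 0

fun incrementFromBackgroundThread() {
    val background = thread {
        sharedCounter += 1 // previously an InvalidMutabilityException on Kotlin/Native
    }
    background.join() // join() establishes happens-before, so the write is visible here
}
```

Under the previous MM, the same code would have required freezing `sharedCounter` or routing the update through an atomic wrapper.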
A few precautions:
- As with the previous MM, memory is not reclaimed eagerly: an object is reclaimed only when GC happens. This extends to Swift/ObjC objects that have crossed the interop boundary into Kotlin/Native.
- `AtomicReference` from `kotlin.native.concurrent` still requires freezing the `value`. Instead, you can use `FreezableAtomicReference` or `AtomicRef` from `atomicfu`. Note that `atomicfu` has not reached 1.x yet.
- `deinit` on Swift/ObjC objects (and the objects they refer to) will be called on a different thread if these objects cross the interop boundary into Kotlin/Native.
- When calling Kotlin suspend functions from Swift, completion handlers might be called on threads other than the main one.
The new MM also brings another set of changes:
- Global properties are initialized lazily when the file they are defined in is first accessed. Previously, global properties were initialized at program startup. This is in line with Kotlin/JVM. As a workaround, properties that must be initialized at program start can be marked with `@EagerInitialization`. Before using it, consult the `@EagerInitialization` documentation.
- `by lazy {}` properties support thread-safety modes and do not handle unbounded recursion. This is in line with Kotlin/JVM.
- Exceptions that escape `operation` in `Worker.executeAfter` are processed like in other parts of the runtime: by trying to execute a user-defined unhandled exception hook, or by terminating the program if the hook was not found or itself failed with an exception.
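Since `by lazy {}` now follows Kotlin/JVM semantics, its thread-safety modes behave the same way across targets. A minimal sketch (the property names are illustrative):

```kotlin
// SYNCHRONIZED (the default) guarantees the initializer runs at most once,
// even if several threads race on first access.
val sharedConfig: Map<String, String> by lazy(LazyThreadSafetyMode.SYNCHRONIZED) {
    mapOf("mode" to "experimental")
}

// NONE skips all locking; use it only for values confined to a single thread.
val threadLocalCache: StringBuilder by lazy(LazyThreadSafetyMode.NONE) {
    StringBuilder("cache")
}

fun main() {
    println(sharedConfig["mode"]) // first access triggers initialization
    println(threadLocalCache.toString())
}
```

Note that, as stated above, a lazy initializer that recursively accesses its own property is not handled specially on either mode.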
Enable the new MM
Update the Kotlin/Native compiler
Update to Kotlin/Native 1.6.0-M1-139 and enable dev repositories:
```kotlin
// build.gradle.kts
repositories {
    maven("https://maven.pkg.jetbrains.space/kotlin/p/kotlin/temporary")
}
```

```kotlin
// settings.gradle.kts
pluginManagement {
    repositories {
        maven("https://maven.pkg.jetbrains.space/kotlin/p/kotlin/temporary")
        gradlePluginPortal()
    }
}
```
Switch to the new MM
Add the compilation flag -Xbinary=memoryModel=experimental. In Gradle, you can alternatively do one of the following:
- In `gradle.properties`:
kotlin.native.binary.memoryModel=experimental
- In a Gradle build script:
```kotlin
// build.gradle.kts
import org.jetbrains.kotlin.gradle.plugin.mpp.KotlinNativeTarget

kotlin.targets.withType(KotlinNativeTarget::class.java) {
    binaries.all {
        binaryOptions["memoryModel"] = "experimental"
    }
}
```
If `kotlin.native.isExperimentalMM()` returns `true`, you've successfully enabled the new MM.
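The same check can be used at runtime, for example to fail fast if a binary was built without the expected flags. A Kotlin/Native-only sketch (the function name is illustrative; this does not compile for other targets):

```kotlin
import kotlin.native.isExperimentalMM

fun assertNewMM() {
    // Fails fast at startup if the binary was built without the new MM.
    check(isExperimentalMM()) { "This build requires -Xbinary=memoryModel=experimental" }
}
```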
Update the libraries
To take full advantage of the new MM, we released new versions of the following libraries:
- `kotlinx.coroutines`: `1.5.1-new-mm-dev2` at https://maven.pkg.jetbrains.space/public/p/kotlinx-coroutines/maven
  - No freezing. Every common primitive (channels, flows, coroutines) works through `Worker` boundaries.
  - Unlike the `native-mt` version, library objects are transparent for `freeze`. For example, if you freeze a channel, all of its internals will get frozen, so it won't work as expected. In particular, this can happen when freezing something that captures a channel.
  - `Dispatchers.Default` is backed by a pool of `Worker`s on Linux and Windows and by a global queue on Apple targets.
  - Use `newSingleThreadContext` to create a coroutine dispatcher that is backed by a `Worker`.
  - Use `newFixedThreadPoolContext` to create a coroutine dispatcher backed by a pool of `N` `Worker`s.
  - `Dispatchers.Main` is backed by the main queue on Darwin and by a standalone `Worker` on other platforms. Don't use `Dispatchers.Main` in unit tests because, in unit tests, nothing processes the main thread queue.
- `ktor`: `1.6.2-native-mm-eap-196` at https://maven.pkg.jetbrains.space/public/p/ktor/eap
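As a sketch, wiring these preview versions into a multiplatform build might look like the following. The artifact names `kotlinx-coroutines-core` and `ktor-client-core` are the libraries' usual coordinates; adjust them to the modules you actually use:

```kotlin
// build.gradle.kts — sketch: pulling the new-MM preview builds.
repositories {
    mavenCentral()
    maven("https://maven.pkg.jetbrains.space/public/p/kotlinx-coroutines/maven")
    maven("https://maven.pkg.jetbrains.space/public/p/ktor/eap")
}

kotlin {
    sourceSets {
        val commonMain by getting {
            dependencies {
                implementation("org.jetbrains.kotlinx:kotlinx-coroutines-core:1.5.1-new-mm-dev2")
                implementation("io.ktor:ktor-client-core:1.6.2-native-mm-eap-196")
            }
        }
    }
}
```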
Using previous versions of the libraries
In your project, you can continue using previous versions of the libraries (including native-mt for kotlinx.coroutines). The existing code will work just like with the previous MM. The only known exception is creating a Ktor HTTP client with the default engine using HttpClient(). In this case, you get the following error:
kotlin.IllegalStateException: Failed to find HttpClientEngineContainer. Consider adding [HttpClientEngine] implementation in dependencies.
To fix this, specify the engine explicitly by replacing `HttpClient()` with `HttpClient(Ios)` or another supported engine (see the Ktor documentation for more details).
Other libraries might also have compatibility issues. If you encounter any, report to the library authors.
Known issues
Performance issues
For the first preview, we're using the simplest scheme for garbage collection: a single-threaded stop-the-world mark-and-sweep algorithm, which is triggered after enough functions, loop iterations, and allocations have been executed. This greatly hinders performance, and one of our top priorities now is addressing these performance issues.
We don't have nice instruments to monitor the GC performance yet. So far, diagnosing requires looking at GC logs. To enable the logs, add the compilation flag -Xruntime-logs=gc=info in a Gradle build script:
```kotlin
// build.gradle.kts
import org.jetbrains.kotlin.gradle.plugin.mpp.KotlinNativeTarget

kotlin.targets.withType(KotlinNativeTarget::class.java) {
    binaries.all {
        freeCompilerArgs += "-Xruntime-logs=gc=info"
    }
}
```
Currently, the logs are only printed to stderr. Note that the exact contents of the logs will change.
The list of known performance issues:
- Since the collector is single-threaded stop-the-world, the pause time of every thread linearly depends on the number of objects in the heap. The more objects that are kept alive, the longer the pauses are. Long pauses on the main thread can result in laggy UI event handling. Both the pause time and the number of objects in the heap are printed to the logs for each GC cycle.
- Being stop-the-world also means that all threads with Kotlin/Native runtime active on them need to synchronize simultaneously for the collection to begin. This also affects the pause time.
- There is a complicated relationship between Swift/ObjC objects and their Kotlin/Native counterparts, which causes Swift/ObjC objects to linger longer than necessary. It means that their Kotlin/Native counterparts are kept in the heap longer, contributing to the slower collection time. This typically doesn't happen, but it can in some corner cases, for example, when a long loop creates several temporary objects that cross the Swift/ObjC interop boundary on each iteration (such as calling a Kotlin callback from a loop in Swift, or vice versa).
  In the logs, there's a number of stable refs in the root set. If this number keeps growing, it may indicate that the Swift/ObjC objects are not being freed when they should be. Try putting `autoreleasepool` around loop bodies (both in Swift/ObjC and Kotlin) that do interop calls.
- (YouTrack issue) Our GC triggers do not adapt to the workload: collections may be requested far more frequently than necessary, which means that GC time may dominate useful application run time and pause the threads more frequently than needed. This manifests in the time between cycles being close to (or even less than) the pause time. Both of these numbers are printed to the logs. Try increasing `kotlin.native.internal.GC.threshold` and `kotlin.native.internal.GC.thresholdAllocations` to force GC to happen less often. Note that the exact meaning of `threshold` and `thresholdAllocations` may change in the future.
- Freezing is currently implemented suboptimally: internally, a separate memory allocation may occur for each frozen object (this recursively includes the object subgraph), which puts unnecessary pressure on the heap.
- Unterminated `Worker`s and unconsumed `Future`s have objects pinned to the heap, contributing to the pause time. Like Swift/ObjC interop, this also manifests in a growing number of stable refs in the root set. To mitigate:
  - Look for calls to `Worker.execute` with resulting `Future` objects that are never consumed using `Future.consume` or `Future.result`. Make sure to either consume all `Future` objects or replace these calls with `Worker.executeAfter` instead.
  - Look for `Worker`s that were started with `Worker.start` but never stopped via `Worker.requestTermination()` (also, note that this call also returns a `Future`).
  - Make sure that `execute` and `executeAfter` are only called on `Worker`s that were started with `Worker.start`, or on a `Worker` that manually processes events with `Worker.processQueue`.
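The `Worker` and `Future` hygiene described above can be sketched as follows. This is a Kotlin/Native-only illustration; the function name and values are placeholders:

```kotlin
import kotlin.native.concurrent.Future
import kotlin.native.concurrent.TransferMode
import kotlin.native.concurrent.Worker

// Consume every Future and terminate every Worker, so their objects
// don't stay pinned in the heap across GC cycles.
fun computeOnWorker(): Int {
    val worker = Worker.start()
    val future: Future<Int> = worker.execute(TransferMode.SAFE, { 21 }) { it * 2 }
    val result = future.result             // consuming the Future unpins its value
    worker.requestTermination().result     // requestTermination() itself returns a Future
    return result
}
```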
We measured performance regressions with a slowdown up to a factor of 5. If you observe anything more significant, report to this performance meta issue.
Known bugs
- Compiler caches are not supported, so the compilation of debug binaries will be slower.
- Freezing machinery is not thread-safe: if an object is being frozen on one thread while its subgraph is being modified on another, by the end the object will be frozen, but some part of its subgraph might not be.
- Documentation is not updated to reflect changes for the new MM.
- There's no application state handling on iOS: the collector will not be throttled down if the application goes into the background. However, the collection is not forced upon going into the background, which leaves the application with a larger memory footprint than necessary, making it a more likely target to be terminated by the OS.
- WASM (or any target that doesn't have pthreads) is not supported with the new MM.
Workarounds
Unexpected object freezing
Some libraries might not be ready for the new MM and freeze-transparency of kotlinx.coroutines, so unexpected InvalidMutabilityException or FreezingException might appear.
To work around such cases, we added a freezing binary option that disables freezing fully (disabled) or partially (explicitOnly).
The former disables the freezing mechanism at runtime (thus making it a no-op), while the latter disables automatic freezing of `@SharedImmutable` globals but keeps direct calls to `freeze` fully functional.
To enable this, add the compilation flag -Xbinary=freezing=disabled. In Gradle, you can alternatively do one of the following:
- In `gradle.properties`:
kotlin.native.binary.freezing=disabled
- In a Gradle build script:
```kotlin
// build.gradle.kts
import org.jetbrains.kotlin.gradle.plugin.mpp.KotlinNativeTarget

kotlin.targets.withType(KotlinNativeTarget::class.java) {
    binaries.all {
        binaryOptions["freezing"] = "disabled"
    }
}
```
Note: this option works only with the new MM.
If you want to not just work around the problem but track down the source of the exceptions, `ensureNeverFrozen` is your best friend.
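To illustrate, `ensureNeverFrozen` makes a later freeze attempt fail at the freezing site, which usually gives a much more useful stack trace than a mutation failure far from the culprit. A Kotlin/Native-only sketch (the class name is illustrative):

```kotlin
import kotlin.native.concurrent.ensureNeverFrozen

// Mark an object that must stay mutable.
class MutableSession {
    var requests = 0
    init {
        // Any later attempt to freeze this object (for example, by capturing it
        // in a frozen lambda) throws FreezingException at the freeze site,
        // instead of an InvalidMutabilityException at some distant mutation site.
        ensureNeverFrozen()
    }
}
```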
Feedback
If you encounter performance regressions with a slowdown of more than a factor of 5, report to this performance meta issue.
You can report other issues with migration to the new MM to this meta issue.