
How does memory management work in C#? – CloudSavvy IT




Compared to C++, C#'s garbage collector seems like magic: you can write code without worrying about the underlying memory at all. But if you care about performance, knowing how the .NET runtime manages its RAM can help you write better code.

Value types versus reference types

There are two kinds of types in .NET, and the distinction directly affects how the underlying memory is handled.

Value types are primitive types with fixed sizes, such as int, bool, float, double, and so on. They are passed by value, which means that if you call someFunction(int arg), the argument is copied into a new location in memory.
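A minimal sketch of this copy-on-pass behavior (the method and variable names here are illustrative, not from the article):

```csharp
using System;

class Program
{
    // The int argument is copied; changes inside the method
    // do not affect the caller's variable.
    static void Increment(int x)
    {
        x += 1;
    }

    static void Main()
    {
        int n = 5;
        Increment(n);
        Console.WriteLine(n); // still 5 — the method worked on a copy
    }
}
```

The caller's `n` is untouched because `Increment` received its own copy of the value.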

Under the hood, value types are (usually) stored on the stack. This mostly applies to local variables, and there are many exceptions where they are stored on the heap instead. But in either case, the location in memory holding the value type contains the actual value of that variable.

The stack is a special region of memory, initialized with a default size but able to grow. The stack is a Last-In, First-Out (LIFO) data structure. You can think of it as a bucket – variables are added to the top of the bucket, and when they go out of scope, .NET reaches into the bucket and removes them one at a time until it reaches the bottom.


The stack is much faster, but it's still just a region of RAM, not some special part of the CPU cache (though because it's smaller than the heap, it's much more likely to be hot in the cache, which helps performance).

The stack gets most of its performance from its LIFO structure. When you call a function, all variables defined in that function are pushed onto the stack. When that function returns and those variables go out of scope, the stack is cleared of everything the function placed on it. The runtime handles this with stack frames, which define blocks of memory for each function. Stack allocations are extremely fast, since they only write a single value to the end of the stack frame.


This is also where the term “StackOverflow” comes from: a StackOverflowException is thrown when a function makes too many nested method calls (usually runaway recursion) and fills up the entire stack.

Reference types, on the other hand, are either too large, lack a fixed size, or live too long to be stored on the stack. Usually these take the form of instantiated objects and classes, but they also include arrays and strings, which can vary in size.

Reference types that are instances of classes are usually initialized with the new keyword, which creates a new instance of the class and returns a reference to it. You can assign this to a local variable, which actually uses the stack to store the reference to the object's location on the heap.
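A small sketch of reference semantics, using a hypothetical Counter class: the reference itself is copied when passed to a method, but both copies point at the same object on the heap.

```csharp
using System;

class Counter
{
    public int Value;
}

class Program
{
    // The Counter reference is copied, but both copies point to the
    // same object on the heap, so the mutation is visible to the caller.
    static void Increment(Counter c)
    {
        c.Value += 1;
    }

    static void Main()
    {
        Counter counter = new Counter(); // object lives on the heap;
                                         // the local holds a reference to it
        Increment(counter);
        Console.WriteLine(counter.Value); // 1
    }
}
```

Contrast this with the value-type case, where the method's changes would be lost with the copy.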

The heap can expand and fill up until the computer runs out of memory, making it great for storing lots of data. But it is unorganized, and in C# it has to be managed by the garbage collector to function properly. Heap allocations are also slower than stack allocations, though they are still quite fast.


However, there are a number of exceptions to these rules, otherwise the value and reference types would be called “stack types” and “heap types.”

  • Captured variables of lambda functions, local variables of iterator blocks (IEnumerator), and local variables of async methods are all stored on the heap.
  • Value-type fields of classes are long-lived variables and are always stored on the heap, embedded in the memory of the reference type that contains them.
  • Static class fields are also always stored on the heap.
  • Custom structs are value types, but they can contain reference types such as lists and strings, which are stored on the heap as normal. Copying the struct copies the references, not the objects they point to; the heap data is shared between the copies.
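The last point can be sketched with a hypothetical struct that mixes a value-type field and a reference-type field: copying the struct duplicates the value field but shares the heap-allocated list.

```csharp
using System;
using System.Collections.Generic;

struct Basket
{
    public int Size;           // value-type field, copied by value
    public List<string> Items; // reference-type field; the list lives on the heap
}

class Program
{
    static void Main()
    {
        var a = new Basket { Size = 1, Items = new List<string> { "apple" } };
        Basket b = a;        // copies Size and the *reference* to Items

        b.Size = 2;          // only changes b's copy of the field
        b.Items.Add("pear"); // mutates the shared heap list

        Console.WriteLine(a.Size);        // 1 — the value field was copied
        Console.WriteLine(a.Items.Count); // 2 — the list is shared, not duplicated
    }
}
```

This shallow-copy behavior is a common source of bugs when structs hold mutable reference types.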

The most notable exception to the rule that “reference types live on the heap” is the use of stackalloc with Span&lt;T&gt;, which manually allocates a block of memory on the stack for a temporary array that is cleaned off the stack as normal when it goes out of scope. This bypasses a relatively expensive heap allocation and puts less pressure on the garbage collector in the process. It can be a lot more efficient, but it's a bit advanced, so if you want to learn more about it, you can read this guide on how to use it properly without causing a StackOverflow exception.
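A minimal sketch of stackalloc with Span&lt;int&gt; (sizes kept deliberately small, since large stack allocations risk overflowing the stack):

```csharp
using System;

class Program
{
    static void Main()
    {
        // A small, fixed-size scratch buffer allocated on the stack —
        // no heap allocation, no garbage collector involvement.
        Span<int> buffer = stackalloc int[8];

        for (int i = 0; i < buffer.Length; i++)
            buffer[i] = i * i;

        int sum = 0;
        foreach (int value in buffer)
            sum += value;

        Console.WriteLine(sum); // 0+1+4+9+16+25+36+49 = 140
    }
}
```

The buffer is reclaimed automatically when the method returns, exactly like any other stack variable.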

What is garbage collection?

The stack is very organized, but the heap is messy. Without something to manage it, objects on the heap are never cleared automatically, which would eventually lead to your application running out of memory because it is never released.

Of course, this is a problem, which is why the garbage collector exists. It runs on a background thread and periodically scans your application for references that are no longer reachable, indicating that the program has stopped caring about the data being referenced. The .NET runtime can then come in and clean up, compacting memory in the process to keep the heap organized.
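You can observe this reclamation with a WeakReference, which tracks an object without keeping it alive. This is a demonstration sketch only: forcing a collection with GC.Collect is not something normal code should do, and whether the object has actually been reclaimed at any given moment is up to the runtime.

```csharp
using System;

class Program
{
    static WeakReference MakeTracked()
    {
        // The new object is only reachable through the weak reference,
        // so the collector is free to reclaim it after this returns.
        return new WeakReference(new object());
    }

    static void Main()
    {
        WeakReference tracked = MakeTracked();

        // Force a collection for demonstration purposes only.
        GC.Collect();
        GC.WaitForPendingFinalizers();

        // Typically prints False once the object has been collected,
        // though the timing is ultimately the runtime's decision.
        Console.WriteLine(tracked.IsAlive);
    }
}
```

In real code you simply stop referencing objects and let the collector decide when to reclaim them.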


However, this magic comes at a cost: garbage collection is slow and expensive. It runs on a background thread, but there are periods where program execution must be paused so the collection can run. This is the trade-off that comes with programming in C#; all you can do is try to minimize the garbage you create.

In languages without garbage collection, you have to clean up after yourself manually, which in many cases is faster but more annoying for the programmer. In a way, a garbage collector is like a Roomba: it cleans your floors automatically, but more slowly than just getting up and vacuuming yourself.
