TheDeveloperBlog.com


Windows File Cache: Performance

This is a summary of a technical document published by Microsoft called Windows File Cache Performance and Tuning.

File caches reduce disk reads. How does the file system cache work in IIS and Windows Server?

Using the document "File Cache Performance and Tuning" as a guide, we explore how the operating system's file system cache works.

Overview. To start, the file cache is essential to the performance of Windows and IIS. It does, however, introduce another layer of complexity to the operating system. The file system cache operates transparently to your applications.

When sections of files on the disk are referenced by applications, they are mapped into virtual memory. This is performed by the Cache Manager in Windows. This is transparent—only the Cache Manager is alerted to this happening.

The memory used for the file cache is treated the same as other memory sections by Windows. This means the same algorithms and best practices apply to files as to in-memory data structures.

Note: The file system cache is remarkable because its implementation is hidden from the consumer applications. This is encapsulation.

Frequently accessed files tend to remain in memory longer than files that are not commonly used. This is similar to the concept of sliding expiration in ASP.NET. When an item is accessed, its expiration time is reset.
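The recency behavior described above can be sketched as a tiny least-recently-used cache. This is a simplification for illustration only; the real Cache Manager uses more sophisticated heuristics than plain LRU.

```python
from collections import OrderedDict

class TinyFileCache:
    """Evicts the least recently used entry when full.

    Accessing an entry refreshes it, so frequently used files
    tend to stay cached (similar to sliding expiration).
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # filename -> data

    def get(self, name):
        if name in self.entries:
            self.entries.move_to_end(name)  # refresh recency
            return self.entries[name]
        return None

    def put(self, name, data):
        self.entries[name] = data
        self.entries.move_to_end(name)
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict the oldest

cache = TinyFileCache(2)
cache.put("a.txt", b"A")
cache.put("b.txt", b"B")
cache.get("a.txt")        # refresh a.txt
cache.put("c.txt", b"C")  # evicts b.txt, not a.txt
```

Because a.txt was touched most recently, the eviction falls on b.txt, mirroring how commonly used files outlive rarely used ones.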

Memory versus hard disks. Memory access is hugely faster than hard disk access. This is the principle that underlies the file cache. If hard disks were to become extremely fast, such as with advanced solid-state drives, this might change.

Memory Hierarchy

Note: The file cache is not always beneficial. Sometimes a file is only read once. The file cache would never register a hit.

Optimizations. The file cache in Windows 2000 and later and IIS uses prefetching optimizations for file sections. A file that is usually accessed after another file can be put in the cache in anticipation of its opening. It may never be hit.

Deferred writes. The cache uses an implementation termed "deferred write-back cache." This means that file system writes "accumulate" in memory—they are not individually written to the disk.

Files versus segments: The file cache in Windows 2000 uses the concept of "active segments." Segments are parts of files.

Note: This gives the cache a fine-grained feel of the data being accessed, and what to keep in memory.

Also: We cannot adjust all these attributes. Windows 2000 and beyond do not expose all of these "knobs" to the administrator or users.

How does read-ahead work? It uses heuristics to anticipate which segments to bring into virtual storage. If file B is always accessed after file A, then whenever file A is opened, file B can be prefetched as well ("sequentially accessed").
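The "B usually follows A" heuristic can be sketched as a toy predictor. This is not how the Cache Manager is actually implemented; it only illustrates the idea of prefetching the most frequent successor.

```python
from collections import defaultdict

class ReadAheadPredictor:
    """Records which file is opened after which, and 'prefetches'
    the most frequent successor of the file just opened."""
    def __init__(self):
        self.successors = defaultdict(lambda: defaultdict(int))
        self.last_opened = None
        self.prefetched = []  # log of prefetch decisions

    def on_open(self, name):
        if self.last_opened is not None:
            self.successors[self.last_opened][name] += 1
        self.last_opened = name
        # Prefetch the most common follower of this file, if any.
        followers = self.successors[name]
        if followers:
            best = max(followers, key=followers.get)
            self.prefetched.append(best)

p = ReadAheadPredictor()
for _ in range(3):
    p.on_open("A")
    p.on_open("B")
```

After the first A-then-B pair is observed, each subsequent open of A triggers a prefetch of B.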

The performance monitor exposes counters for these operations. They tell you how many Read Ahead operations, Copy Reads, MDL Reads, Pin Reads, Data Flushes, and Data Maps are occurring and how frequently. Many parts of this article touch on these metrics.

What is meant by transparent? When you develop a Windows application, you write it as though it is directly working on files. You don't invoke the file cache yourself. The term "transparent" means that the file cache is hidden.

VMS. Windows 2000 has a file cache system in many ways similar to UNIX. This is because both operating systems borrowed ideas from VMS. Dave Cutler, a developer of VMS, also worked on Windows NT.

And: The OS/2 operating system, developed in part by Microsoft, also preceded Windows NT with these ideas.

Dave Cutler: Wikipedia

Memory. How much memory does the file cache use? Usually a lot. In Windows 2000 and higher, this is determined dynamically. The performance monitor reports this value as System Cache Resident Bytes.

The cache stores file sections in virtual memory, rather than whole logical files. Each section is 256 KB. On file servers and IIS machines, the file cache is often the largest consumer of memory.
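Given the 256 KB section size stated above, you can compute which section a byte offset falls into. This is illustrative arithmetic only, not an actual Cache Manager API.

```python
SECTION_SIZE = 256 * 1024  # 256 KB, as stated in the document

def section_for_offset(offset):
    """Return (section_index, section_start_byte) for a file offset."""
    index = offset // SECTION_SIZE
    return index, index * SECTION_SIZE

# A read at byte 1,000,000 falls in section 3,
# which begins at byte 786,432 (3 * 256 KB).
idx, start = section_for_offset(1_000_000)
```

Only the 256 KB sections that are actually touched need to be mapped, which is why caching sections is cheaper than caching whole files.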

However: The size is carefully determined by logic, which negates the need to tweak it yourself.

Note: The one important thing you can do to improve performance is to monitor IIS and make sure it has enough RAM for the file cache.

You can disable file caching, but it is hard to do. You would have to provide low-level file IO routines, such as opening files with the FILE_FLAG_NO_BUFFERING flag in the Win32 CreateFile function. As a .NET developer, this is likely impossible in managed code of any language.

Usage. File servers like IIS will use the file system cache for every file they serve. Client computers will also use file caches for the files they download. So the same files will be cached in many spots using the same algorithms.

Google Chrome. The article I read does not factor in newer programs like Chrome that use aggressive caching in memory. I expect that Google Chrome and Firefox use many custom caches.

So: Caching is even more prevalent today. This is evident in Google Chrome, which uses extensive memory caches.

Resource duplication. In a closed system, it would be ideal to eliminate all of the double-caching to save computer resources. Methods of doing this would be interesting to develop and observe.

File cache is global. Windows 2000 and newer versions make it hard to see what applications are doing with the cache. As stated in the start, the file cache introduces another level of complexity, and this reflects that.

Measurements. A logical read is when an application specifies to read a file. However, the file cache "diverts" this and redirects the request to the virtual cache. The stats reflect logical reads.

Tip: The file cache works transparently and will "transform" what the application assumes is a disk read into a virtual memory read.

And: The cache can do this because it is encapsulated and it overrides the IO interfaces.

The cache makes benchmarks harder to perform and repeat. This is because it introduces a level of transparency and complexity. To get around this, testers use measurements of "cold start" and "warm start."
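A cold-start versus warm-start measurement can be sketched as follows. The first read of a freshly written file may touch the disk; the second read of the same file is usually served from the OS file cache. Timings vary by machine, so no specific speedup is asserted here.

```python
import os
import tempfile
import time

def timed_read(path):
    """Read the whole file, returning (elapsed_seconds, byte_count)."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        data = f.read()
    return time.perf_counter() - start, len(data)

# Write a 1 MB test file, then read it twice.
path = os.path.join(tempfile.mkdtemp(), "probe.bin")
with open(path, "wb") as f:
    f.write(os.urandom(1024 * 1024))

cold, size = timed_read(path)  # "cold start" (may hit the disk)
warm, _ = timed_read(path)     # "warm start" (usually cache-served)
```

Repeating and averaging both measurements, as benchmark testers do, separates disk performance from cache performance.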

Caveat emptor: The document helpfully provides this warning, which means "buyer beware." It cautions you to interpret benchmark results carefully.

Caveat emptor: Wikipedia

Copy Interface explanation. The Copy Interface is how Microsoft implemented the file cache in a backward compatible way. This means that both the OS and the application have file buffers.

Two places: Data exists in two places. The application provides its buffer to the OS, which also has the data in a buffer.

Fast Copy Interface explanation. There is also a Fast Copy Interface. This is the same as the Copy Interface, but avoids the "initial call" to the file system. Fast Copy must know that the actual file system won't be needed.

How does Lazy Write work? It "accumulates" write operations in memory and writes them back to disk later, coalescing many logical writes into fewer physical ones. It is fascinating and must have been difficult to implement well.

Note: When a server is busy, it can accumulate many write requests. It must assert itself and force the writes to be performed.

Flushes: This is called "threshold-triggered lazy write flushes." This is a way to avoid edge cases and problems under severe load.

Dirty cache pages. File sections in the cache that have been written to in memory are termed "dirty cache pages." There is also a way for the OS to write dirty caches to disk immediately (called write-through caching).
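Dirty pages and threshold-triggered flushing can be sketched together. This toy model accumulates dirty pages and flushes when a count threshold is crossed; the real Cache Manager's thresholds and timers are internal and more elaborate.

```python
class LazyWriter:
    """Accumulates dirty pages in memory and flushes them to a
    simulated disk only when a threshold is crossed, or when
    flush() is forced (analogous to write-through)."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.dirty = {}           # page_number -> data
        self.disk = {}            # simulated backing store
        self.physical_writes = 0

    def write(self, page, data):
        self.dirty[page] = data   # rewriting a page coalesces writes
        if len(self.dirty) >= self.threshold:
            self.flush()

    def flush(self):
        for page, data in self.dirty.items():
            self.disk[page] = data
            self.physical_writes += 1
        self.dirty.clear()

w = LazyWriter(threshold=4)
for i in range(10):
    w.write(i % 2, f"rev{i}")  # 10 logical writes to only 2 pages
w.flush()                      # force the remaining dirty pages out
```

Ten logical writes land on only two distinct pages, so the disk sees just two physical writes, which is the essence of the coalescing benefit.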

Mapping Interface. This is another interface in Windows 2000. It is more efficient than the Copy Interface because it eliminates the need to store two copies of the data. An application that uses Mapping Interface receives a pointer to the data.

Mapping Interface problems. This interface presents a different set of problems. For one, the Cache Manager cannot monitor what is happening to the data. This means it cannot purge pages from the cache based on heuristics.

Applications signal usages. The Mapping Interface contract requires the applications to indicate when they are done with the data from the disk. Then the file segments can be trimmed from the virtual store.

Pinning: This is a term that refers to a segment of memory being marked as not to be trimmed or rearranged.

And: When an application is using the Mapping Interface, the data must be pinned so it won't be cleared.

When unpinning happens. When the application is done with the file segment, it is unpinned and therefore able to be removed from the file cache. The lazy writer thread will then flush the dirty pages to the physical disk.
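The pin/unpin contract described above can be sketched as follows. This is a conceptual model, not the Cache Manager's actual data structures: pinned pages survive trimming, and unpinned dirty pages are flushed and evicted.

```python
class PinnedCache:
    """Sketch of the pinning contract: pinned pages cannot be
    trimmed; unpinned dirty pages are flushed, then evicted."""
    def __init__(self):
        self.pages = {}  # page -> {"data", "pinned", "dirty"}
        self.disk = {}   # simulated backing store

    def map_page(self, page, data):
        # Mapped pages start out pinned while the application works.
        self.pages[page] = {"data": data, "pinned": True, "dirty": False}

    def write(self, page, data):
        entry = self.pages[page]
        entry["data"] = data
        entry["dirty"] = True

    def unpin(self, page):
        self.pages[page]["pinned"] = False  # application signals "done"

    def trim(self):
        """Flush and evict every unpinned page (the lazy writer's job)."""
        for page in list(self.pages):
            entry = self.pages[page]
            if not entry["pinned"]:
                if entry["dirty"]:
                    self.disk[page] = entry["data"]
                del self.pages[page]

c = PinnedCache()
c.map_page(7, "metadata")
c.write(7, "metadata v2")
c.trim()    # page 7 is pinned: nothing is evicted
c.unpin(7)
c.trim()    # now it is flushed to disk and removed from the cache
```

The first trim is a no-op because the page is still pinned; only after the application unpins it does the lazy writer flush and evict it.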

What uses Mapping. I was interested to read that the Windows NTFS file system uses the Mapping Interface for file metadata. This refers to information such as file names, attributes, dates modified and created, and file sizes.

Tip: Mapping "guarantees the integrity" of the metadata, which is critical. It also taps the benefits of the file cache.

Pin Reads/sec: Number of reads to pinned data per second.

Pin Read Hit percent: How effective the pinned segment cache is.
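The hit-percentage counters are simple ratios of cache hits to total reads. A minimal sketch, with made-up numbers for illustration:

```python
def hit_percent(hits, total_reads):
    """Cache hit rate as a percentage; 0.0 if there were no reads."""
    if total_reads == 0:
        return 0.0
    return 100.0 * hits / total_reads

# If 950 of 1,000 pin reads were satisfied from the cache,
# the counter would report 95%.
rate = hit_percent(950, 1000)
```

A high percentage means the pinned segments are being reused; a low one means the cache is doing little for that workload.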

What other file cache interfaces exist? There is an interface called MDL (for Memory Descriptor List). From the document, I think it is less common and not important for me to delve into.

Tip: An interface is a contract between two programs or resources. In the C# language, an interface is a contract that a type promises to implement.

Interface

Performance. The document presents benchmarks. The actual data are several years old, but the methods are useful. The authors used a program called "Probe," which ran artificial file IO tests. It randomly accessed segments of a 16 MB file.

The book Microsoft included with Windows NT claimed that Windows' cache is self-tuning. This claim was later removed from the documentation. Things like this likely have contributed to programmers' distrust of Microsoft.

Not self-tuning: The experiments showed that the file cache in Windows 2000 is not self-tuning. "Knobs" to adjust parameters could help.

How effective is Lazy Write? It is effective. When Windows 2000 was subjected to 550 logical disk writes per second, it performed only 8.5 physical writes per second, a coalescing ratio of roughly 65 to 1. This shows that the Lazy Write optimization is extremely useful.

Discussion. We should not rely on the Windows and IIS file caches to meet every need. But the file cache is a critical optimization. And we must be careful not to duplicate work done by it. This could reduce performance.

Caching static files. One question that I had was whether it is useful to store a static file in memory using C# source code. My reading of this article indicates that it is not useful to cache static files in C# code.

Tip: It is usually not useful to try to implement your own file cache. The file cache uses sophisticated (and fast) low-level algorithms.

ASP.NET questions. I have seen questions about whether it is worthwhile to cache an entire PDF in ASP.NET. I think it is not worthwhile. Read the PDF off of the disk, and let IIS cache the file.

File Cache Performance and Tuning: Microsoft

Note: Thanks to Lev Elbert for writing in with a correction on the history of Windows NT and Windows 2000.

Summary. The file cache is used to store recently accessed files in memory, and this accelerates further accesses. It is useful for performance optimization. We referenced an important document about file caching in Windows.

