Performance Tips

2018-10-29  ngugg

Related link:
https://developer.apple.com/library/archive/documentation/FileManagement/Conceptual/FileSystemProgrammingGuide/PerformanceTips/PerformanceTips.html#//apple_ref/doc/uid/TP40010672-CH7-SW1

Relative to other operations, accessing files on disk is one of the slowest operations a computer can perform. Depending on the size and number of files, it can take anywhere from a few milliseconds to several minutes to read files from a disk-based hard drive. Make sure your code performs as efficiently as possible under even light to moderate workloads.

If your app slows down or becomes less responsive when it starts working with files, use the Instruments app to gather some baseline metrics. Instruments shows you how much time your app spends operating on files and helps you monitor various file-related activity. As you fix each problem, run your code in Instruments again and record the results, so that you can verify whether your changes worked.

Potential Problem Areas and Fixes

Look for these possible problem areas:

General Recommendations

These recommendations can help improve your file system related performance. As with all tips, measure performance before and after so that you can verify optimizations.

Deciding When to Use File System Caching or Mapped I/O

Disk caching can be a good way to accelerate access to file data, but its use is not appropriate in every situation. Caching increases the memory footprint of your app and if used inappropriately can be more expensive than simply reloading data from the disk.

Caching is most appropriate for files you plan to access multiple times. If you have files that you intend to use only once, either disable the caches or map the file into memory.

Disabling File System Caching

When reading data that you won’t need again soon, such as when streaming a large multimedia file, tell the file system not to add that data to the file-system caches. Disable file system caching for files being read once and discarded by passing the DataReadingUncached option to init(contentsOfURL:options:). By default, the system maintains a buffer cache with the data most recently read from disk. This disk cache is most effective when it contains frequently used data. If you leave file caching enabled while streaming a large multimedia file, you can quickly fill up the disk cache with data you won’t use again. Even worse, this process is likely to push other data out of the cache that might have benefited from being there.
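As a minimal sketch in Swift (using the modern `Data(contentsOf:options:)` spelling of this initializer; the file path is a hypothetical example), an uncached one-time read might look like this:

```swift
import Foundation

// Sketch: read a large media file once without adding it to the
// file-system cache. The path below is a hypothetical example.
let movieURL = URL(fileURLWithPath: "/path/to/large-movie.mov")
do {
    // .uncached tells the system not to keep this data in the disk cache,
    // because we intend to read it only once and then discard it.
    let data = try Data(contentsOf: movieURL, options: [.uncached])
    print("Read \(data.count) bytes without caching")
} catch {
    print("Failed to read file: \(error)")
}
```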

Note: For reading uncached data, it is recommended that you use 4K-aligned buffers. This gives the system more flexibility in how it loads the data into memory and can result in faster load times.

Using Mapped I/O Instead of Caching

For data read randomly from a file, you can sometimes improve performance by mapping that file directly into your app’s virtual memory space. File mapping is a programming convenience for files you want to access with read-only permissions. It lets the kernel take advantage of the virtual memory paging mechanism to read the file data only when it is needed. You can also use file mapping to overwrite existing bytes in a file; however, you cannot extend the size of the file using this technique. Mapped files bypass the system disk caches, so only one copy of the file is stored in memory.
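As a rough sketch (assuming read-only access and a hypothetical local file path), mapping can be requested through Data's reading options:

```swift
import Foundation

// Sketch: map a read-only file into memory instead of caching it.
// .mappedIfSafe maps the file only when it is safe to do so (for example,
// when the file is on a local volume) and falls back to a normal read
// otherwise. The path below is a hypothetical example.
let indexURL = URL(fileURLWithPath: "/path/to/large-index.dat")
do {
    let mapped = try Data(contentsOf: indexURL, options: [.mappedIfSafe])
    // Pages are faulted in lazily as the data is accessed.
    let header = mapped.prefix(16)
    print("Mapped \(mapped.count) bytes; header: \(Array(header))")
} catch {
    print("Failed to map file: \(error)")
}
```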

Important: If you map a file into memory and the file becomes inaccessible—because the disk containing the file was ejected or the network server containing the file is unmounted—your app will crash with a SIGBUS error. Your app can also crash if you map a file into memory, that file gets truncated, and you attempt to access data at a range that no longer exists.

For more information about mapping files into memory, see File System Advanced Programming Topics.

Working with Zero-Filling

For security reasons, file systems are supposed to zero out areas on disk when those areas are allocated to a file. This behavior prevents data left over from a previously deleted file from being included with the new file.

For both reading and writing operations, the system delays the writing of zeroes until the last possible moment. When you close a file after writing to it, the system writes zeroes to any portions of the file your code did not touch. When reading from a file, the system writes zeroes to new areas only when your code attempts to read from that area or when it closes the file. This delayed-write behavior avoids redundant I/O operations to the same area of a file.
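As an illustration, the following sketch (the path is a hypothetical example) shows a write pattern that forces the file system to zero-fill a large untouched region on file systems such as HFS Plus:

```swift
import Foundation

// Sketch: seeking far past the end of a file and writing there leaves a
// gap that the file system must fill with zeroes no later than when the
// file is closed. The path below is a hypothetical example.
let demoURL = URL(fileURLWithPath: "/tmp/zero-fill-demo.bin")
_ = FileManager.default.createFile(atPath: demoURL.path, contents: nil)

if let handle = try? FileHandle(forWritingTo: demoURL) {
    // Writing at a 10 MB offset without touching bytes 0..<10 MB creates a
    // 10 MB region that has to be zero-filled.
    try? handle.seek(toOffset: 10 * 1024 * 1024)
    try? handle.write(contentsOf: Data("tail".utf8))
    try? handle.close() // any remaining zero-fill happens by this point
}
```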

If you notice a delay when closing your files, it is likely because of this zero-fill behavior. Make sure you do the following when working with files:

Note: Whereas the HFS Plus file system implements zero-fill behavior, APFS solves the zero-filling problem for you by supporting sparse files. In APFS, empty parts of a file that span one or more blocks are not physically stored, making it unnecessary to zero-fill entire blocks on disk.

Use Modern File System Interfaces

Choose routines that let you specify paths using NSURL objects over those that take string-based paths. Most URL-based routines are supported in macOS 10.6 and later, and are designed to take advantage of technologies like Grand Central Dispatch. This gives your code an immediate advantage on multicore computers without requiring much extra work on your part.
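As a small illustration (the path is a hypothetical example), the URL-based resource-values API can stand in for the older string-based attributes call:

```swift
import Foundation

// Sketch: prefer the URL-based interface over its string-based counterpart.
// The path below is a hypothetical example.
let fileURL = URL(fileURLWithPath: "/path/to/document.txt")

// URL-based: fetch the resource values you need in one call.
if let values = try? fileURL.resourceValues(forKeys: [.fileSizeKey, .contentModificationDateKey]) {
    print("size: \(values.fileSize ?? 0) bytes, modified: \(String(describing: values.contentModificationDate))")
}

// String-based equivalent, generally less preferable:
// let attributes = try FileManager.default.attributesOfItem(atPath: fileURL.path)
```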

Prefer routines that accept block objects over those that accept callback functions or methods. Blocks are a convenient and more efficient way to implement callback-type behaviors. Blocks often require much less code to implement because they don’t require you to define and manage a context data structure for passing data. Some routines might also execute your block by scheduling it in a GCD queue, which can also improve performance.
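For instance, a directory walk can combine both recommendations: it is URL-based and passes a block as the error handler. A minimal sketch (the directory path is a hypothetical example):

```swift
import Foundation

// Sketch: a block-based, URL-based directory enumeration. The errorHandler
// closure takes the place of an older callback or delegate mechanism, and
// the enumerator can prefetch the resource keys we ask for. The directory
// path below is a hypothetical example.
let directoryURL = URL(fileURLWithPath: "/path/to/project", isDirectory: true)
let keys: [URLResourceKey] = [.isDirectoryKey, .fileSizeKey]

if let enumerator = FileManager.default.enumerator(
    at: directoryURL,
    includingPropertiesForKeys: keys,
    options: [.skipsHiddenFiles],
    errorHandler: { url, error in
        // Called once per failure; return true to keep enumerating.
        print("Could not read \(url.path): \(error)")
        return true
    }
) {
    for case let fileURL as URL in enumerator {
        let values = try? fileURL.resourceValues(forKeys: Set(keys))
        if values?.isDirectory == false {
            print("\(fileURL.lastPathComponent): \(values?.fileSize ?? 0) bytes")
        }
    }
}
```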
