TinyLab.org -- Focus on Linux: trace things to their source, see the big in the small!
Website: https://tinylab.org




2019 Linux Storage, Filesystem, and Memory-Management Summit: special coverage

By Falcon of TinyLab.org Jun 12, 2019

This year's LSFMM (Linux Storage, Filesystem, and Memory-Management Summit) was held from April 30 to May 2 in San Juan, Puerto Rico. LWN covered it in a series of articles across its two most recent weekly editions; this post briefly sorts through that coverage and summarizes it below.

If you have questions about any of the commentary, you are welcome to read the original LWN articles, or to scan the QR code at the end of this post and add the author on WeChat for further discussion.

May 30 LWN Weekly: LSFMM coverage

There have been some concerns expressed that the stable kernel is growing too much, by adding too many patches, which makes it less stable.

Keywords: stable tree, testing, xfstests, blktests, KernelCI

Levin has long been working to make the Linux kernel's stable trees more stable. His two areas of focus are "that fewer regressions are released" and "that all of the fixes get out there for users".

On the second point, Levin presented "Machine learning and stable kernels" at the 2018 Open Source Summit North America, an effort to automatically find fixes that were never explicitly submitted for stable releases, so that more of them make it into the stable trees.

The first point is about making sure those fixes get more thorough testing after human review, and that is what this session focused on. The discussion covered xfstests for filesystems, blktests for storage, and MMTests for memory management, as well as which automated frameworks should run these tests. Levin's preference is KernelCI; when someone in the audience asked why not the 0-Day automated testing service, he replied that 0-Day is only semi-open, which makes it less attractive than the fully open-source KernelCI.

The discussion then turned to include/exclude lists. Some of these test cases take a very long time to run, or simply cannot run on every kernel version with every configuration, so a subset has to be maintained, and certain test cases prone to false positives have to be excluded. LTP is a typical example: it has thousands of test cases, but they cannot be dropped into product testing wholesale, because they produce a lot of false positives (bugs that do not affect the final product but could eat a great deal of time to resolve), hence the need for include/exclude lists.

Levin closed by calling on everyone to contribute good test cases to KernelCI, so that the stable trees can run more and better tests and become really more stable.

He has also recently been running blktests to track down a problem that manifested itself as an ext4 regression in xfstests. It turned out to be a problem in the SCSI multiqueue (mq) code, but he thought it would be nice to be able to pinpoint whether future problems were block layer problems or in ext4. So he has been integrating blktests into his test suite.

Keywords: blktests

Ted Ts'o of Google has recently been working with NFS testing and blktests; the NFS testing is mostly done with xfstests.

He used blktests to track down an ext4 regression: the problem showed up in ext4, but the real cause turned out to be in the SCSI multiqueue code in the block layer. He has integrated these tests into his own automated test platform and found that blktests still needs polish; for example, running the full suite requires enabling 38 kernel modules, but blktests never clearly says so. He is preparing to contribute the related setup work back to blktests, and he encouraged more kernel developers to run it, since "it makes his life easier".

Application developers hate the fact that when they update files in place, a crash can leave them with old or new data—or sometimes a combination of both. He discussed some implementation ideas that he has for atomic writes for XFS and wanted to see what the other filesystem developers thought about it.

There are filesystems that can write out-of-place, such as XFS, Btrfs, and others, so it would be nice to allow for atomic writes at the filesystem layer.

In that system, users can write as much data as they want to a file, but nothing will be visible until they do an explicit commit operation. Once that commit is done, all of the changes become active. One simple way to implement this would be to handle the commit operation as part of fsync(), which means that no new system call is required.

Keywords: atomic write, fsync, O_ATOMIC

Application developers often complain that, after updating a file in place, a crash can leave the data on disk in an unexpected state: old data, new data, or sometimes a mix of both, rather than the latest copy they intended to write.

The system currently provides sync(), fsync(), fdatasync(), and related interfaces for data synchronization, but only fsync() guarantees that one specific file has been written to disk, and if you pass the wrong file descriptor the result can be surprising: the data is simply gone after a system crash. sync() only guarantees that dirty data has been queued for writeback, fdatasync() does not guarantee that metadata is written, and fflush() only flushes the user-space stream buffer. Another option is to pass O_SYNC to open() so that data is written to disk immediately, but then the slow I/O drags down the writer.
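As a concrete illustration (my own sketch, not from the talk), this is the classic pattern applications use today to get a crash-safe file update out of nothing more than fsync() and rename(); the function and path names are made up for the example:

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Replace dir/name crash-safely: write a temporary file, fsync it,
 * rename it over the original, then fsync the directory so the rename
 * itself is durable. */
int durable_replace(const char *dir, const char *name,
                    const char *buf, size_t len)
{
    char tmp[4096], dst[4096];
    snprintf(tmp, sizeof(tmp), "%s/.%s.tmp", dir, name);
    snprintf(dst, sizeof(dst), "%s/%s", dir, name);

    int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;
    if (write(fd, buf, len) != (ssize_t)len || fsync(fd) != 0) {
        close(fd);
        return -1;
    }
    close(fd);

    if (rename(tmp, dst) != 0)
        return -1;

    /* fsync the parent directory so the new directory entry is persistent. */
    int dfd = open(dir, O_RDONLY | O_DIRECTORY);
    if (dfd < 0)
        return -1;
    int ret = fsync(dfd);
    close(dfd);
    return ret;
}
```

The pain point raised in this session is precisely that applications have to know and get every one of these steps right just to update a file safely.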

The underlying devices do not implement this kind of atomic write yet, or their existing interfaces are awkward, so providing such an interface at the filesystem layer would be very useful. About five years ago, a paper from HP Research described adding a flag to open() for this: in such a system, users can write as much data as they want to a file, and nothing becomes visible until an explicit commit, at which point all the changes take effect; the commit could simply be implemented as part of fsync(), so no new system call is required. Hellwig has written some patches along these lines, mainly adding an O_ATOMIC flag to open(), although some problems remain:

It adds a new O_ATOMIC flag for open, which requests writes to be failure-atomic, that is either the whole write makes it to persistent storage, or none of it, even in case of power of other failures.
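A minimal sketch of what using such an interface might look like, assuming the proposed (and not yet merged) O_ATOMIC flag and the fsync()-as-commit semantics described above; this only illustrates the semantics under discussion, not an existing API:

```c
#include <fcntl.h>
#include <unistd.h>

/* O_ATOMIC is a *proposed* flag and is not in mainline; the placeholder
 * below only lets the sketch compile. The idea: all writes made through
 * this descriptor become visible together at the fsync() "commit", or
 * not at all, even across a crash. */
#ifndef O_ATOMIC
#define O_ATOMIC 0
#endif

int update_record(const char *path, const void *buf, size_t len, off_t off)
{
    int fd = open(path, O_WRONLY | O_ATOMIC);
    if (fd < 0)
        return -1;

    if (pwrite(fd, buf, len, off) != (ssize_t)len) {
        close(fd);
        return -1;
    }

    /* The commit: with failure-atomic semantics, either the whole update
     * reaches persistent storage or none of it does. */
    int ret = fsync(fd);
    close(fd);
    return ret;
}
```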

The discussion was lively, and several filesystems expressed strong interest. Something to look forward to!

Much of the development work that has gone on in the Linux filesystem world over the last few years has been related to the performance of copying files, at least indirectly, he said. There are still pain points around copy operations, however, so he would like to see those get addressed.

Keywords: cp, copy_file_range, ACL, xattrs, page cache, I/O size, fiemap

French raised the following observation: although copy performance has improved a lot with faster hardware (NVMe/UFS), cp performance on Linux still looks a bit embarrassing. What can software, and the kernel in particular, do to improve things? He presented some measurements:

On the fast drive, for a 2GB copy on ext4, cp took 1.2s (1.7s on the slow), scp 1.7s (8.4s), and rsync took 4.3s (not run on the slow drive, apparently). These represent “a dramatic difference in performance” for a “really stupid” copy operation

His analysis suggests this may be because cp uses a 128KB I/O size while the others use 16KB. Other tools, such as parcp, parallel, fpart/fpsync, and mutil, instead take the parallelization route. So what can filesystems do to help?

  1. copy_file_range(): around five in-kernel filesystems support it today, but Btrfs does not yet. The call copies data between two files entirely inside the kernel, avoiding shuffling the data back and forth between user space and kernel space (see the sketch after this list).

  2. ACLs/xattrs: there is currently no single API that copies a whole file together with its metadata, so should there be a user-space library for this, or is there something the kernel can do, especially for the security-related parts?

  3. I/O size: this depends on the hardware, but what user space gets from stat() today is st_blksize, which is only the filesystem block size, not the device's optimal I/O size. Is it worth adding a field to statx() for it? For a RAID built from different devices, the RAID controller would also have to talk to the devices to come up with this number.

  4. Page cache: could user space be given a way to bypass the page cache? The page cache is valuable for data that is about to be accessed again, for example copying a kernel tree and then immediately building it; but if the copy will not be used afterwards, going through the page cache is wasted work, and skipping it would improve performance considerably.

  5. fiemap: FIEMAP replaces block-by-block mapping by building extents over runs of contiguous blocks, cutting down the metadata (bitmap) overhead; filesystems that support it can speed up copies as well.
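As a concrete illustration of point 1 (my own sketch, not from the talk), here is a minimal file copy built on copy_file_range(), with error handling reduced to the essentials:

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

/* Copy src to dst entirely inside the kernel with copy_file_range()
 * (glibc >= 2.27), avoiding read()/write() round trips through a
 * user-space buffer. */
int kernel_copy(const char *src, const char *dst)
{
    int in = open(src, O_RDONLY);
    if (in < 0)
        return -1;

    struct stat st;
    if (fstat(in, &st) != 0) {
        close(in);
        return -1;
    }

    int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, st.st_mode & 0777);
    if (out < 0) {
        close(in);
        return -1;
    }

    off_t left = st.st_size;
    while (left > 0) {
        ssize_t n = copy_file_range(in, NULL, out, NULL, left, 0);
        if (n <= 0)
            break;  /* error or unsupported filesystem; fall back in real code */
        left -= n;
    }

    close(in);
    close(out);
    return left == 0 ? 0 : -1;
}
```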

French said that Linux copy performance was a bit embarrassing; OS/2 was probably better for copy performance in some ways. But he did note that the way sparse files were handled using FIEMAP in cp was great. Ts’o pointed out that FIEMAP is a great example of how the process should work. Someone identified a problem, so kernel developers added a new feature to help fix it, and now that code is in cp; that is what should be happening with any other kernel features needed for copy operations.

The topic French raised is food for thought. The kernel and kernel developers never stand alone: the complete Linux system delivered to users includes many other tools, and if there really is a problem in the end-user experience, kernel developers cannot simply toss it back to application developers with "just go parallelize it". French's attitude toward this work deserves real credit.

French would like to see the filesystem developers participate in developing the tools or at least advising those developers.

Cooperation between different communities, setting up working groups, or even proactively inviting the relevant application developers to summits like this one may be a better way forward. Thanks to those in the community who keep thinking about and exploring the "really stupid" "cp problem".

Control groups are managed using a virtual filesystem; a particular group can be deleted just by removing the directory that represents it. But the truth of the matter, Gushchin said, is that while removing a control group’s directory hides the group from user space, that group continues to exist in the kernel until all references to it go away. While it persists, it continues to consume resources.

Specifically, for control groups that share the same underlying filesystem, the shrinkers are not able to reclaim memory from the VFS caches after a control group dies, at least under slight to moderate memory pressure.

Keywords: control group, memory shrinker

Control groups were designed to manage resources better, but they face a resource-reclamation problem of their own. When a control group is no longer needed, its directory can be removed from user space, but in practice the kernel keeps the corresponding resources around until every reference count drops to zero. Reclaiming them requires the memory shrinkers to run, and shrinkers usually only kick in under certain conditions, for example when an allocation pushes memory usage past some threshold; if the system still has plenty of memory, the shrinkers never run, so large numbers of these control groups pile up. Each one only occupies around 200KB, but Gushchin reports: "I've even seen a host with more than 100Gb of memory wasted for dying cgroups".
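For reference (my own illustration, not from the talk), this is what that lifecycle looks like from user space, assuming a cgroup v1 memory controller mounted at /sys/fs/cgroup/memory; the group name is made up:

```c
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/* The user-space view of the cgroup lifecycle discussed above: creating
 * a directory creates a group, removing the directory "deletes" it. */
int main(void)
{
    const char *grp = "/sys/fs/cgroup/memory/demo";

    if (mkdir(grp, 0755) != 0) {        /* create the control group */
        perror("mkdir");
        return 1;
    }

    /* ... move tasks in, let them fill the page cache, move them out ... */

    if (rmdir(grp) != 0) {              /* "delete" the control group */
        perror("rmdir");
        return 1;
    }

    /* From user space the group is now gone; inside the kernel it lives
     * on as a "dying" cgroup until all references to it are released
     * and its memory is reclaimed. */
    return 0;
}
```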

Gushchin's earlier approach was to apply artificial shrinker pressure so the system would reclaim this memory, but it caused a performance regression and was reverted. His current proposal is to reparent a deleted control group's pages to its parent group, so the dying group itself can go away gracefully. Judging by the numbers in the latest patch set, it really does solve the problem; hopefully it lands in mainline soon, since it could be a big help for cloud hosts.

May 23 LWN Weekly: LSFMM coverage

A new version of the UFS specification is being written and turbo-write is expected to be part of it. The idea behind turbo-write is to use an SLC buffer to provide faster writes, with the contents being shifted to the slower TLC as needed.

Keywords: UFS, turbo-write, SLC, TLC

Smartphone users have probably noticed how much faster phones have become over the years, thanks to faster processors (better designs and process nodes) and also to faster storage, where UFS deserves much of the credit. Early eMMC kept raising read/write speeds through parallelism, but UFS broke through eMMC's bottleneck with a faster full-duplex serial link, delivering a severalfold speedup.

Of course, there is another familiar experience: phones getting slower over time. The background here is that behind both the eMMC and UFS interface standards sit flash memory chips, and flash has gone through three major generations, SLC, MLC, and TLC. Each step brings larger capacity and lower cost, but also a shorter program/erase lifetime (cells eventually leak charge after enough writes) and slower reads and writes. The progression works by using more charge levels to store more bits per cell: SLC (single-level cell, 1 bit/cell), MLC (multi-level cell, 2 bits/cell), TLC (triple-level cell, 3 bits/cell), so it is easy to see why complexity rises and endurance falls. MLC and TLC endurance is roughly 1/10 and 1/20 of SLC's, respectively: a TLC cell survives only about 500 program/erase cycles (so a 64GB TLC chip can still absorb about 32,000GB of new data, not as little as it may sound), versus roughly 10,000 cycles for MLC and 100,000 for SLC. Because the cost drop is so large, the market today is dominated by TLC and MLC, sometimes mixed with SLC; some devices even run as SLC first and switch back to TLC once the drive is more than a third full.

"Turbo-write" here is about managing these mixed SLC + TLC/MLC products. They use the SLC area as a buffer, and turbo-write writes directly into the SLC. But SLC capacity is limited: leave the mode on all the time and it fills up, and things quickly slow down again. So the driver has to decide, first, when to engage turbo-write, and second, when to move data from SLC to TLC in the background so that enough SLC capacity remains for turbo-write when it is needed. At the same time, data should not be evacuated to TLC the moment it arrives, so that the SLC's endurance is put to full use.

So the UFS driver has to balance all of the above: "both the turbo-write governance and the evacuation policy should be handled by the UFS driver".

Finally, here is a question worth thinking about in advance: if such a turbo mode is exposed to the kernel, what could phone vendors build on top of it? Beyond benchmarking, what else? Which scenarios really need this burst of instantaneous read/write performance?

zoned block devices have multiple zones with different characteristics; usually there are zones that can only be written in sequential order as well as conventional zones that can be written in random order. The genesis of zoned block devices is shingled magnetic recording (SMR) devices, which were created to increase the capacity of hard disks, but at the cost of some flexibility.

Keywords: SMR, F2FS, ZoneFS, Btrfs

SMR increases storage density by shrinking the spacing between tracks, even letting tracks overlap, at the cost of flexibility for writes. The discussion here was about what kind of filesystem is needed when part of a disk uses SMR (sequential writes only) and part of it is conventional. The filesystems currently gaining support in the community are F2FS and Btrfs, plus the newly developed ZoneFS.

ZoneFS is a new filesystem that exposes zoned block devices to users in the simplest possible way, Le Moal said. It exports each zone as a file under the mountpoint in two directories: /conventional for random-access zones or /sequential for sequential-only zones. Under those directories, the zones will be files that use the zone number as the file name.
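As a small illustration of that layout (my own sketch; the mountpoint, directory names, and zone numbering follow the description above and are purely illustrative):

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Append data to a sequential zone through zonefs: one file per zone,
 * named after the zone number, under <mountpoint>/conventional or
 * <mountpoint>/sequential. */
int append_to_sequential_zone(const char *mnt, int zone,
                              const void *buf, size_t len)
{
    char path[256];
    snprintf(path, sizeof(path), "%s/sequential/%d", mnt, zone);

    /* Sequential zones only accept writes at the current write pointer,
     * hence O_APPEND. */
    int fd = open(path, O_WRONLY | O_APPEND);
    if (fd < 0)
        return -1;

    ssize_t n = write(fd, buf, len);
    close(fd);
    return n == (ssize_t)len ? 0 : -1;
}
```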

All three are still incomplete and under active development.

Currently, there are applications that create and populate a temporary file, set the attributes desired, then rename it, Goldstein said. The developers think that the file is written persistently to the filesystem, which is not true, but mostly works. The official answer is that you must use fsync(), but it is a “terrible answer” because it has impacts all over the system.

Keywords: fsync, overlayfs, rename, xattrs, fiemap, CrashMonkey

Once again the discussion was about application developers' wish to be sure their data has really been written back to disk, although this time it ended without a conclusion. The atomic-write approach discussed earlier actually produced more tangible results; see the May 30 section above.

It was noted that overlayfs already relies on a similar feature: it sets xattrs on a file and then renames it, expecting the metadata to be persistent once the rename is observed. Another is that, on XFS and ext4, something similar can be achieved with the FIEMAP ioctl plus the FIEMAP_FLAG_SYNC flag.

There are two types of barriers that he is talking about. The first would be used by overlayfs; it sets extended attributes (xattrs) on a file, then renames it. Overlayfs expects that, if the rename is observed, then the metadata has been written persistently. The other barrier is for data to be persistently written, which can be done today using the FIEMAP ioctl() command (with the FIEMAP_FLAG_SYNC flag), at least for XFS and ext4, he asserted.
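A minimal sketch of that FIEMAP trick (my own illustration): issuing the ioctl with FIEMAP_FLAG_SYNC forces the file's data to be flushed before the extent map is returned, at least on the filesystems mentioned above:

```c
#include <string.h>
#include <sys/ioctl.h>
#include <linux/fiemap.h>
#include <linux/fs.h>

/* Ask for the file's extent map with FIEMAP_FLAG_SYNC. Here we do not
 * even look at the extents; the data flush is the point. */
int fiemap_sync(int fd)
{
    struct fiemap fm;

    memset(&fm, 0, sizeof(fm));
    fm.fm_start = 0;
    fm.fm_length = ~0ULL;            /* cover the whole file */
    fm.fm_flags = FIEMAP_FLAG_SYNC;  /* sync the file before mapping */
    fm.fm_extent_count = 0;          /* only count extents, no output array */

    return ioctl(fd, FS_IOC_FIEMAP, &fm);
}
```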

Others added, however, that even on XFS and ext4, FIEMAP does not provide this guarantee for sparse files.

There is a document from the CrashMonkey project, "Documenting the crash-recovery guarantees of Linux file systems", which describes in detail what POSIX, xfs, btrfs, ext4, and F2FS require and implement for fsync(file) and fsync(dir); application developers would do well to read it carefully first.

The idea of an asynchronous version of fsync() is kind of counter-intuitive, Wheeler said. But there are use cases in large-scale data migration. If you are copying a batch of thousands of files from one server to another, you need a way to know that those files are now durable, but don’t need to know that they were done in any particular order. You could find out that the data had arrived before destroying the source copies.

Keywords: fsync, io_uring

Wheeler raised this topic because he wants an API that can process a whole batch of files without depending on any particular ordering. Someone suggested io_uring:

The io_uring interface allows arbitrary operations to be done in a kernel worker thread and, when they complete, notifies user space.

But that did not seem to be what Wheeler wanted, so Ts'o proposed an fsync2():

fsync2() that takes an array of file descriptors and returns when they have all been synced. If the filesystem has support for fsync2(), it can do batching on the operations. It would be easier for application developers to call a function with an array of file descriptors rather than jumping through the hoops needed to set up an io_uring, he said.

Rather than doing something that complicated in the kernel, couldn't this just be implemented as a user-space library on top of io_uring?
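A rough sketch of that library idea (my own illustration, using liburing): queue one fsync per descriptor, submit them all at once, and wait for the completions in whatever order they arrive:

```c
#include <liburing.h>

/* Batch-sync an array of file descriptors with io_uring instead of a
 * new fsync2() system call. Requires liburing; completion order is
 * irrelevant, which matches the use case described above. */
int fsync_batch(int *fds, unsigned nr)
{
    struct io_uring ring;
    int ret = 0;

    if (io_uring_queue_init(nr, &ring, 0) < 0)
        return -1;

    /* Queue one fsync request per file descriptor. */
    for (unsigned i = 0; i < nr; i++) {
        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
        if (!sqe) {
            io_uring_queue_exit(&ring);
            return -1;
        }
        io_uring_prep_fsync(sqe, fds[i], 0);
    }

    if (io_uring_submit(&ring) < 0) {
        io_uring_queue_exit(&ring);
        return -1;
    }

    /* Collect all completions, in whatever order they arrive. */
    for (unsigned i = 0; i < nr; i++) {
        struct io_uring_cqe *cqe;
        if (io_uring_wait_cqe(&ring, &cqe) < 0) {
            ret = -1;
            break;
        }
        if (cqe->res < 0)
            ret = -1;
        io_uring_cqe_seen(&ring, cqe);
    }

    io_uring_queue_exit(&ring);
    return ret;
}
```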

Amir Goldstein has a use case for a feature that could be called a “lazy file reflink”, he said, though it might also be described as “VFS-level snapshots”.

Keywords: VFS, snapshot, inotify, fanotify

Goldstein demonstrated overlayfs snapshots two years ago. The idea is to designate a subdirectory and take a snapshot of it, so that any change to a file under that tree can be handled with copy-on-write. It is implemented at the VFS layer, so the underlying filesystem type does not matter.

The implementation is mainly based on a filesystem change journal plus inotify/fanotify; the relevant source code is all public, for anyone who wants to dig in.
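For a taste of the fanotify side of such a change journal, here is a minimal watcher (my own sketch, not Goldstein's code): it reports the path of each modified file on a mount, where a real snapshot implementation would trigger copy-up instead. It needs CAP_SYS_ADMIN, and the watched path is just an example:

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <limits.h>
#include <stdio.h>
#include <sys/fanotify.h>
#include <unistd.h>

int main(void)
{
    int fan = fanotify_init(FAN_CLASS_NOTIF, O_RDONLY | O_LARGEFILE);
    if (fan < 0) { perror("fanotify_init"); return 1; }

    /* Watch the whole mount containing /srv/data for modifications. */
    if (fanotify_mark(fan, FAN_MARK_ADD | FAN_MARK_MOUNT,
                      FAN_MODIFY | FAN_CLOSE_WRITE, AT_FDCWD, "/srv/data") < 0) {
        perror("fanotify_mark");
        return 1;
    }

    struct fanotify_event_metadata buf[64];
    for (;;) {
        ssize_t len = read(fan, buf, sizeof(buf));
        if (len <= 0)
            break;

        struct fanotify_event_metadata *ev = buf;
        while (FAN_EVENT_OK(ev, len)) {
            if (ev->fd >= 0) {
                char link[64], path[PATH_MAX];
                snprintf(link, sizeof(link), "/proc/self/fd/%d", ev->fd);
                ssize_t n = readlink(link, path, sizeof(path) - 1);
                if (n > 0) {
                    path[n] = '\0';
                    printf("changed: %s\n", path);
                }
                close(ev->fd);
            }
            ev = FAN_EVENT_NEXT(ev, len);
        }
    }
    return 0;
}
```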

THPs reduce translation lookaside buffer (TLB) misses so they provide better performance. Facebook is trying to reduce misses on the TLB for instructions by putting hot functions into huge pages. It is using the Binary Optimization and Layout Tool (BOLT) to profile its code in order to identify the hot functions.

This results in a 5-10% performance boost without requiring any kernel changes to support it.

Keywords: THP, TLB miss, performance boost, BOLT

Transparent Huge Pages (THP) reduce TLB (translation lookaside buffer) misses and can therefore improve performance significantly. Building on this, Facebook places the instructions of hot functions into huge pages to get the same benefit for code. Concretely, the Binary Optimization and Layout Tool (BOLT) is used to profile the code and identify the hot functions, which are gathered into a special section of the executable (a "hot section", 8MB in size). At run time, the application allocates an 8MB temporary buffer and copies the contents of the hot section into it, then creates a corresponding huge-page mapping over that region (via anonymous mmap() plus madvise()); once the huge pages exist, the saved contents are copied back into them, and finally mprotect() makes the region executable again.
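A sketch of that sequence (my own illustration, not Facebook's actual code); hot_start and hot_size stand in for the hot section's address and its 8MB size and are assumptions of this example:

```c
#define _GNU_SOURCE
#include <string.h>
#include <sys/mman.h>

/* Save the hot code, replace its pages with an anonymous huge-page
 * mapping at the same address, copy the code back, and make it
 * executable again. */
int remap_hot_text_to_hugepages(void *hot_start, size_t hot_size)
{
    /* 1. Save a copy of the hot code in a temporary buffer. */
    void *backup = mmap(NULL, hot_size, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (backup == MAP_FAILED)
        return -1;
    memcpy(backup, hot_start, hot_size);

    /* 2. Replace the file-backed text pages with an anonymous mapping at
     *    the same address and ask the kernel to back it with huge pages. */
    void *p = mmap(hot_start, hot_size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
    if (p == MAP_FAILED) {
        munmap(backup, hot_size);
        return -1;
    }
    madvise(p, hot_size, MADV_HUGEPAGE);

    /* 3. Copy the saved code into the huge pages and make them executable. */
    memcpy(p, backup, hot_size);
    if (mprotect(p, hot_size, PROT_READ | PROT_EXEC) != 0) {
        munmap(backup, hot_size);
        return -1;
    }

    munmap(backup, hot_size);
    return 0;
}
```

Obviously the code performing these steps must not itself live in the region being remapped, and this is exactly the kind of contortion that direct THP support for executables would make unnecessary.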

This approach yields a 5-10% performance improvement without requiring any kernel changes. The problem is that the symbol addresses and uprobe targets in that THP region are now broken, because the kernel has no idea the region is part of the program's text. If THP were supported directly in the filesystem code, none of these contortions would be needed: a single madvise() call could achieve the same goal.

Song Liu of Facebook is currently working on this improvement in the kernel, though his implementation is still fairly simple and has a number of limitations. He solicited broad feedback on the problem at LSFMM 2019, and many of the experts present shared their views.


