
ImportantFileWriter: Fix races part 2.

This CL aims to improve the success rate of ImportantFileWriter and
reduce the chances of it leaking tmp files in case of failure. This is
accomplished by the following:

- IFW uses a newly introduced base::CreateAndOpenTemporaryFileInDir,
  which returns a base::File that is opened for exclusive r/w/d use.
  This ensures that other processes cannot open the file and interfere
  with Chrome's operations on it.
- In case of error writing to or flushing the tmp file, the file is
  deleted on Windows via DeleteOnClose. This, combined with the change
  above, should reduce the chances of a temp file being left behind. The
  primary cause of leftover files has been other processes (such as A/V
  scanners) mapping the file into their address space, thereby blocking
  deletes. Now that Chrome has exclusive access to the file, this should
  no longer be possible except in the case of a ReplaceFile failure.
- Writes are now performed in chunks of 8 MB to avoid exhausting kernel
  address space. This should reduce write errors stemming from lack of
  resources (the second most common cause of failures, after full disks).
- The window between Close and ReplaceFile is now as small as possible
  to increase the chances that the latter operation takes place without
  interference. ReplaceFile requires opening the tmp file with exclusive
  access, so we must hope that we win the race against other software.
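The steps above can be sketched in portable C++. This is a minimal, hypothetical illustration of the chunked-write-then-replace pattern, not Chromium's implementation: `WriteFileAtomically` and `kChunkSize` are illustrative names, the chunk size is tiny here so the loop is exercised (8 MB in the real change), and `std::rename`/`std::remove` stand in for the platform's `ReplaceFile` and `DeleteOnClose` handling.

```cpp
#include <algorithm>
#include <cstdio>
#include <string>

// Hypothetical sketch: write |data| to a temp file in fixed-size chunks,
// then atomically replace |dest| with it. On any failure the temp file is
// removed (standing in for DeleteOnClose).
bool WriteFileAtomically(const std::string& dest, const std::string& data) {
  const std::string tmp = dest + ".tmp";
  FILE* f = std::fopen(tmp.c_str(), "wb");
  if (!f)
    return false;
  constexpr size_t kChunkSize = 4;  // 8 MB in the real change.
  size_t written = 0;
  while (written < data.size()) {
    const size_t n = std::min(kChunkSize, data.size() - written);
    if (std::fwrite(data.data() + written, 1, n, f) != n) {
      std::fclose(f);
      std::remove(tmp.c_str());  // Clean up on write failure.
      return false;
    }
    written += n;
  }
  // Close as late as possible and replace immediately afterwards, keeping
  // the window in which other software can grab the temp file small.
  if (std::fclose(f) != 0) {
    std::remove(tmp.c_str());
    return false;
  }
  return std::rename(tmp.c_str(), dest.c_str()) == 0;
}
```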

This CL also makes the following changes:

- CreateAndOpenTemporaryStream now returns a stream backed by a file
  opened for exclusive access on Windows.
- FILEToFile is introduced. It returns a base::File handle to the file
  backing a file stream (without sacrificing the stream).
- ReadStreamToString{,WithMaxSize} are introduced. They are much like
  ReadFileToString{,WithMaxSize}, although they operate on an
  already-opened stream. They seek back to its start (if supported by
  the stream) before reading.
- The test launcher uses DeleteOnClose to ensure that child processes'
  output files are cleaned up.
- The "ImportantFile." histograms FileCreateError, FileDeleteError,
  FileRenameError, FileWriteError, and TempFileFailures have had their
  expiration pushed back so that they may be used to evaluate this
  change.
- The ImportantFile.FileOpenError metric is marked obsolete, as the temp
  file is no longer opened in an operation independent from its
  creation.
- The metric ImportantFile.DeleteOnCloseError is introduced to record
  failures to delete the temp file before it has been closed (this is
  never expected to fail).
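The new ReadStreamToString behavior described above (best-effort rewind of an already-open stream, then a chunked read) can be modeled with a short sketch. The function name, buffer size, and structure here are illustrative assumptions, not Chromium's exact implementation:

```cpp
#include <cstdio>
#include <string>

// Hypothetical model of ReadStreamToString: seek back to the start of an
// already-open stream (best-effort; fails for pipes), then read it in
// chunks so callers can reuse a stream they have just written.
bool ReadStreamToStringSketch(FILE* stream, std::string* contents) {
  contents->clear();
  std::fseek(stream, 0, SEEK_SET);  // Best-effort rewind.
  char buf[64];
  size_t n;
  while ((n = std::fread(buf, 1, sizeof(buf), stream)) > 0)
    contents->append(buf, n);
  return !std::ferror(stream);
}
```

A typical caller would write to a temporary stream, then hand the same `FILE*` to this helper to read the contents back without reopening the file.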

Metrics will inform us whether or not we need to add retries for either
the delete or replace operations.

BUG: 1075917
Change-Id: Iced2c8044f6e97287c2dc193cd64144579fc25fe
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/2173060
Commit-Queue: Greg Thompson <grt@chromium.org>
Reviewed-by: Robert Kaplow <rkaplow@chromium.org>
Reviewed-by: Gabriel Charette <gab@chromium.org>
Cr-Commit-Position: refs/heads/master@{#764936}
diff --git a/base/files/file_util.cc b/base/files/file_util.cc
index 430321b..ebc8d9a 100644
--- a/base/files/file_util.cc
+++ b/base/files/file_util.cc
@@ -16,6 +16,7 @@
 #include "base/check_op.h"
 #include "base/files/file_enumerator.h"
 #include "base/files/file_path.h"
+#include "base/posix/eintr_wrapper.h"
 #include "base/strings/string_piece.h"
 #include "base/strings/string_util.h"
 #include "base/strings/stringprintf.h"
@@ -125,30 +126,48 @@
 }
 #endif  // !defined(OS_NACL_NONSFI)
 
-bool ReadFileToStringWithMaxSize(const FilePath& path,
-                                 std::string* contents,
-                                 size_t max_size) {
+bool ReadStreamToString(FILE* stream, std::string* contents) {
+  return ReadStreamToStringWithMaxSize(
+      stream, std::numeric_limits<size_t>::max(), contents);
+}
+
+bool ReadStreamToStringWithMaxSize(FILE* stream,
+                                   size_t max_size,
+                                   std::string* contents) {
   if (contents)
     contents->clear();
-  if (path.ReferencesParent())
-    return false;
-  FILE* file = OpenFile(path, "rb");
-  if (!file) {
-    return false;
-  }
 
-  // Many files supplied in |path| have incorrect size (proc files etc).
-  // Hence, the file is read sequentially as opposed to a one-shot read, using
-  // file size as a hint for chunk size if available.
+  // Seeking to the beginning is best-effort -- it is expected to fail for
+  // certain non-file streams (e.g., pipes).
+  HANDLE_EINTR(fseek(stream, 0, SEEK_SET));
+
+  // Many files have incorrect size (proc files etc). Hence, the file is read
+  // sequentially as opposed to a one-shot read, using file size as a hint for
+  // chunk size if available.
   constexpr int64_t kDefaultChunkSize = 1 << 16;
-  int64_t chunk_size;
+  int64_t chunk_size = kDefaultChunkSize - 1;
 #if !defined(OS_NACL_NONSFI)
-  if (!GetFileSize(path, &chunk_size) || chunk_size <= 0)
-    chunk_size = kDefaultChunkSize - 1;
+  ScopedBlockingCall scoped_blocking_call(FROM_HERE, BlockingType::MAY_BLOCK);
+#if defined(OS_WIN)
+  BY_HANDLE_FILE_INFORMATION file_info = {};
+  if (::GetFileInformationByHandle(
+          reinterpret_cast<HANDLE>(_get_osfhandle(_fileno(stream))),
+          &file_info)) {
+    LARGE_INTEGER size;
+    size.HighPart = file_info.nFileSizeHigh;
+    size.LowPart = file_info.nFileSizeLow;
+    if (size.QuadPart > 0)
+      chunk_size = size.QuadPart;
+  }
+#else   // defined(OS_WIN)
+  stat_wrapper_t file_info = {};
+  if (!File::Fstat(fileno(stream), &file_info) && file_info.st_size > 0)
+    chunk_size = file_info.st_size;
+#endif  // defined(OS_WIN)
   // We need to attempt to read at EOF for feof flag to be set so here we
   // use |chunk_size| + 1.
   chunk_size = std::min<uint64_t>(chunk_size, max_size) + 1;
-#else
+#else   // !defined(OS_NACL_NONSFI)
   chunk_size = kDefaultChunkSize;
 #endif  // !defined(OS_NACL_NONSFI)
   size_t bytes_read_this_pass;
@@ -157,9 +176,8 @@
   std::string local_contents;
   local_contents.resize(chunk_size);
 
-  ScopedBlockingCall scoped_blocking_call(FROM_HERE, BlockingType::MAY_BLOCK);
   while ((bytes_read_this_pass = fread(&local_contents[bytes_read_so_far], 1,
-                                       chunk_size, file)) > 0) {
+                                       chunk_size, stream)) > 0) {
     if ((max_size - bytes_read_so_far) < bytes_read_this_pass) {
       // Read more than max_size bytes, bail out.
       bytes_read_so_far = max_size;
@@ -174,12 +192,11 @@
     bytes_read_so_far += bytes_read_this_pass;
     // Last fread syscall (after EOF) can be avoided via feof, which is just a
     // flag check.
-    if (feof(file))
+    if (feof(stream))
       break;
     local_contents.resize(bytes_read_so_far + chunk_size);
   }
-  read_status = read_status && !ferror(file);
-  CloseFile(file);
+  read_status = read_status && !ferror(stream);
   if (contents) {
     contents->swap(local_contents);
     contents->resize(bytes_read_so_far);
@@ -193,6 +210,19 @@
                                      std::numeric_limits<size_t>::max());
 }
 
+bool ReadFileToStringWithMaxSize(const FilePath& path,
+                                 std::string* contents,
+                                 size_t max_size) {
+  if (contents)
+    contents->clear();
+  if (path.ReferencesParent())
+    return false;
+  ScopedFILE file_stream(OpenFile(path, "rb"));
+  if (!file_stream)
+    return false;
+  return ReadStreamToStringWithMaxSize(file_stream.get(), max_size, contents);
+}
+
 #if !defined(OS_NACL_NONSFI)
 bool IsDirectoryEmpty(const FilePath& dir_path) {
   FileEnumerator files(dir_path, false,
@@ -202,6 +232,11 @@
   return false;
 }
 
+bool CreateTemporaryFile(FilePath* path) {
+  FilePath temp_dir;
+  return GetTempDir(&temp_dir) && CreateTemporaryFileInDir(temp_dir, path);
+}
+
 ScopedFILE CreateAndOpenTemporaryStream(FilePath* path) {
   FilePath directory;
   if (!GetTempDir(&directory))