Examples of hflush()
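
FSDataOutputStream.hflush() pushes any data buffered on the client out to the datanodes in the write pipeline, so the bytes written so far become visible to readers that open the file afterwards, without closing the stream. It does not guarantee the data has been persisted to disk on the datanodes; hsync() provides that stronger guarantee. The sketch below shows the basic write/hflush/read-back pattern on a MiniDFSCluster; the class name, path, payload, and cluster settings are illustrative assumptions and are not taken from the snippets on this page.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class HflushDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new HdfsConfiguration();
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
    try {
      FileSystem fs = cluster.getFileSystem();
      Path path = new Path("/demo/hflush-demo");   // illustrative path
      byte[] data = {1, 2, 3, 4};

      FSDataOutputStream out = fs.create(path);
      out.write(data);
      // Flush buffered bytes to the datanodes; the stream stays open.
      out.hflush();

      // A reader opened after hflush() can already see the flushed bytes.
      FSDataInputStream in = fs.open(path);
      byte[] readBack = new byte[data.length];
      in.readFully(readBack);
      in.close();

      out.close();
    } finally {
      cluster.shutdown();
    }
  }
}

The snippets that follow use the same call inside Hadoop's own tests, mostly to verify that hflushed data survives a NameNode restart, or a rename or delete of the containing directory, while the writer is still open.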


Examples of org.apache.hadoop.fs.FSDataOutputStream.hflush()

      cluster = new MiniDFSCluster.Builder(conf).numDataNodes(3).build();
      FileSystem fs = cluster.getFileSystem();
      // Creating a file with 4096 blockSize to write multiple blocks
      stream = fs.create(FILE_PATH, true, BLOCK_SIZE, (short) 1, BLOCK_SIZE);
      stream.write(DATA_BEFORE_RESTART);
      stream.hflush();
     
      // Wait for most of the blocks (NUM_BLOCKS - 1 of them) to reach the datanodes
      long len = 0;
      while (len < BLOCK_SIZE * (NUM_BLOCKS - 1)) {
        FileStatus status = fs.getFileStatus(FILE_PATH);
        len = status.getLen();

Examples of org.apache.hadoop.fs.FSDataOutputStream.hflush()

      NameNode.getAddress(conf).getPort();
      // Creating a file with 4096 blockSize to write multiple blocks
      stream = fs.create(FILE_PATH, true, BLOCK_SIZE, (short) 1, BLOCK_SIZE);
      stream.write(DATA_BEFORE_RESTART);
      stream.write((byte)1);
      stream.hflush();
     
      // explicitly do NOT close the file before restarting the NN.
      cluster.restartNameNode();
     
      // this will fail if the final block of the file is prematurely COMPLETEd

Examples of org.apache.hadoop.fs.FSDataOutputStream.hflush()

      // explicitly do NOT close the file before restarting the NN.
      cluster.restartNameNode();
     
      // this will fail if the final block of the file is prematurely COMPLETEd
      stream.write((byte)2);
      stream.hflush();
      stream.close();
     
      assertEquals(DATA_BEFORE_RESTART.length + 2,
          fs.getFileStatus(FILE_PATH).getLen());
     

Examples of org.apache.hadoop.fs.FSDataOutputStream.hflush()

      Path file1 = new Path(dir1, "file1");
      FSDataOutputStream stm1 = TestFileCreation.createFile(fs, file1, 1);
      System.out.println("testFileCreationDeleteParent: "
          + "Created file " + file1);
      TestFileCreation.writeFile(stm1);
      stm1.hflush();

      // create file2.
      Path dir2 = new Path("/user/dir2");
      Path file2 = new Path(dir2, "file2");
      FSDataOutputStream stm2 = TestFileCreation.createFile(fs, file2, 1);

Examples of org.apache.hadoop.fs.FSDataOutputStream.hflush()

      Path file2 = new Path(dir2, "file2");
      FSDataOutputStream stm2 = TestFileCreation.createFile(fs, file2, 1);
      System.out.println("testFileCreationDeleteParent: "
          + "Created file " + file2);
      TestFileCreation.writeFile(stm2);
      stm2.hflush();

      // move dir1 while file1 is open
      Path dir3 = new Path("/user/dir3");
      fs.mkdirs(dir3);
      fs.rename(dir1, dir3);

Examples of org.apache.hadoop.fs.FSDataOutputStream.hflush()

      Path file3 = new Path(dir3, "file3");
      FSDataOutputStream stm3 = fs.create(file3);
      fs.rename(file3, new Path(dir3, "bozo"));
      // Get a new block for the file.
      TestFileCreation.writeFile(stm3, TestFileCreation.blockSize + 1);
      stm3.hflush();

      // Stop the NameNode before closing the files.
      // This will ensure that the write leases are still active and present
      // in the edit log.  Similarly, there should be a pending ADD_BLOCK_OP
      // for file3, since we just added a block to that file.

Examples of org.apache.hadoop.fs.FSDataOutputStream.hflush()

      Path file1 = new Path(dir1, "file1");
      FSDataOutputStream stm1 = TestFileCreation.createFile(fs, file1, 1);
      System.out.println("testFileCreationDeleteParent: "
          + "Created file " + file1);
      TestFileCreation.writeFile(stm1);
      stm1.hflush();

      // create file2.
      Path dir2 = new Path("/user/dir2");
      Path file2 = new Path(dir2, "file2");
      FSDataOutputStream stm2 = TestFileCreation.createFile(fs, file2, 1);

Examples of org.apache.hadoop.fs.FSDataOutputStream.hflush()

      Path file2 = new Path(dir2, "file2");
      FSDataOutputStream stm2 = TestFileCreation.createFile(fs, file2, 1);
      System.out.println("testFileCreationDeleteParent: "
          + "Created file " + file2);
      TestFileCreation.writeFile(stm2);
      stm2.hflush();

      // move dir1 while file1 is open
      Path dir3 = new Path("/user/dir3");
      fs.rename(dir1, dir3);

Examples of org.apache.hadoop.fs.FSDataOutputStream.hflush()

      Path file1 = new Path(dir1, "file1");
      FSDataOutputStream stm1 = TestFileCreation.createFile(fs, file1, 1);
      System.out.println("testFileCreationDeleteParent: " +
                         "Created file " + file1);
      TestFileCreation.writeFile(stm1);
      stm1.hflush();

      Path dir2 = new Path("/user/dir2");
      fs.mkdirs(dir2);

      fs.rename(file1, dir2);

Examples of org.apache.hadoop.fs.FSDataOutputStream.hflush()

      Path file1 = new Path(dir1, "file1");
      FSDataOutputStream stm1 = TestFileCreation.createFile(fs, file1, 1);
      System.out.println("testFileCreationDeleteParent: "
          + "Created file " + file1);
      TestFileCreation.writeFile(stm1);
      stm1.hflush();

      Path dir2 = new Path("/user/dir2");

      fs.rename(file1, dir2);