I've built a file-hashing method in Java that takes an input String representation of filepath+filename and then calculates the hash of that file. The hash can be any of the natively supported Java hashing algorithms, such as MD2 through SHA-512.

I'm trying to eke out the last drop of performance, as this method is an integral part of a project I'm working on. I was advised to try using a FileChannel instead of a regular FileInputStream.
My original method:
/**
 * Gets Hash of file.
 *
 * @param file String path + filename of file to get hash.
 * @param hashAlgo Hash algorithm to use. <br/>
 *     Supported algorithms are: <br/>
 *     MD2, MD5 <br/>
 *     SHA-1 <br/>
 *     SHA-256, SHA-384, SHA-512
 * @return String value of hash. (Variable length dependent on hash algorithm used)
 * @throws IOException If file is invalid.
 * @throws HashTypeException If no supported or valid hash algorithm was found.
 */
public String getHash(String file, String hashAlgo) throws IOException, HashTypeException {
    StringBuffer hexString = null;
    try {
        MessageDigest md = MessageDigest.getInstance(validateHashType(hashAlgo));
        FileInputStream fis = new FileInputStream(file);

        byte[] dataBytes = new byte[1024];

        int nread = 0;
        while ((nread = fis.read(dataBytes)) != -1) {
            md.update(dataBytes, 0, nread);
        }
        fis.close();

        byte[] mdbytes = md.digest();
        hexString = new StringBuffer();
        for (int i = 0; i < mdbytes.length; i++) {
            hexString.append(Integer.toHexString((0xFF & mdbytes[i])));
        }

        return hexString.toString();
    } catch (NoSuchAlgorithmException | HashTypeException e) {
        throw new HashTypeException("Unsupported Hash Algorithm.", e);
    }
}
Refactored method:
/**
 * Gets Hash of file.
 *
 * @param file String path + filename of file to get hash.
 * @param hashAlgo Hash algorithm to use. <br/>
 *     Supported algorithms are: <br/>
 *     MD2, MD5 <br/>
 *     SHA-1 <br/>
 *     SHA-256, SHA-384, SHA-512
 * @return String value of hash. (Variable length dependent on hash algorithm used)
 * @throws IOException If file is invalid.
 * @throws HashTypeException If no supported or valid hash algorithm was found.
 */
public String getHash(String fileStr, String hashAlgo) throws IOException, HasherException {
    File file = new File(fileStr);
    MessageDigest md = null;
    FileInputStream fis = null;
    FileChannel fc = null;
    ByteBuffer bbf = null;
    StringBuilder hexString = null;
    try {
        md = MessageDigest.getInstance(hashAlgo);
        fis = new FileInputStream(file);
        fc = fis.getChannel();
        bbf = ByteBuffer.allocate(1024); // allocation in bytes

        int bytes;
        while ((bytes = fc.read(bbf)) != -1) {
            md.update(bbf.array(), 0, bytes);
        }

        fc.close();
        fis.close();

        byte[] mdbytes = md.digest();
        hexString = new StringBuilder();
        for (int i = 0; i < mdbytes.length; i++) {
            hexString.append(Integer.toHexString((0xFF & mdbytes[i])));
        }

        return hexString.toString();
    } catch (NoSuchAlgorithmException e) {
        throw new HasherException("Unsupported Hash Algorithm.", e);
    }
}
Both return a correct hash, but the refactored method only seems to cooperate on small files. When I pass in a large file, it chokes completely and I can't figure out why. I'm new to NIO, so please advise.

Edit: Forgot to mention I'm throwing SHA-512's through it for testing.
Update:
Updating with my now-current method.
/**
 * Gets Hash of file.
 *
 * @param file String path + filename of file to get hash.
 * @param hashAlgo Hash algorithm to use. <br/>
 *     Supported algorithms are: <br/>
 *     MD2, MD5 <br/>
 *     SHA-1 <br/>
 *     SHA-256, SHA-384, SHA-512
 * @return String value of hash. (Variable length dependent on hash algorithm used)
 * @throws IOException If file is invalid.
 * @throws HashTypeException If no supported or valid hash algorithm was found.
 */
public String getHash(String fileStr, String hashAlgo) throws IOException, HasherException {
    File file = new File(fileStr);
    MessageDigest md = null;
    FileInputStream fis = null;
    FileChannel fc = null;
    ByteBuffer bbf = null;
    StringBuilder hexString = null;
    try {
        md = MessageDigest.getInstance(hashAlgo);
        fis = new FileInputStream(file);
        fc = fis.getChannel();
        bbf = ByteBuffer.allocateDirect(8192); // allocation in bytes - 1024, 2048, 4096, 8192

        int b;
        b = fc.read(bbf);
        while ((b != -1) && (b != 0)) {
            bbf.flip();
            byte[] bytes = new byte[b];
            bbf.get(bytes);
            md.update(bytes, 0, b);
            bbf.clear();
            b = fc.read(bbf);
        }

        fis.close();

        byte[] mdbytes = md.digest();
        hexString = new StringBuilder();
        for (int i = 0; i < mdbytes.length; i++) {
            hexString.append(Integer.toHexString((0xFF & mdbytes[i])));
        }

        return hexString.toString();
    } catch (NoSuchAlgorithmException e) {
        throw new HasherException("Unsupported Hash Algorithm.", e);
    }
}
So I attempted hashing the MD5 of a 2.92GB file with both my original example and my latest updated example. Of course, any benchmark is relative, since OS and disk caching and other "magic" going on will skew repeated reads of the same file... but here's a shot at some benchmarks. I loaded each method up and fired it off 5 times after compiling it fresh. The benchmark was taken from the last (5th) run, as that would be the "hottest" run for that algorithm, with any "magic" in full effect (in my theory, anyway).
Here are the benchmarks so far:
Original Method - 14.987909 (s)
Latest Method - 11.236802 (s)
That's a 25.03% reduction in the time taken to hash the same 2.92GB file. Not bad at all.
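As an aside, the hex conversion in all three versions uses Integer.toHexString(0xFF & b), which drops leading zeros: a digest byte of 0x0A appends "a" instead of "0a", so two different digests can occasionally collapse to the same string. A minimal sketch of a zero-padded conversion using String.format (the class and method names here are illustrative, not from the original):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class HexDemo {

    // Pads every byte to exactly two hex characters, so the digest
    // string always has a fixed width (e.g. 32 chars for MD5).
    static String toHex(byte[] digest) {
        StringBuilder sb = new StringBuilder(digest.length * 2);
        for (byte b : digest) {
            sb.append(String.format("%02x", 0xFF & b));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        // MD5 of empty input is a well-known constant.
        byte[] d = MessageDigest.getInstance("MD5").digest(new byte[0]);
        System.out.println(toHex(d)); // d41d8cd98f00b204e9800998ecf8427e
    }
}
```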
Best Answer
Three suggestions:
1) clear the buffer after each read:
int bytes;
while ((bytes = fc.read(bbf)) != -1) {
    md.update(bbf.array(), 0, bytes);
    bbf.clear();
}
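Put into a complete method, suggestion 1 might look like the sketch below (assuming a heap buffer so array() is available; the 8192-byte size and class name are illustrative):

```java
import java.io.FileInputStream;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;

public class ChannelHashDemo {

    // clear() resets the buffer's position after each update. Without it,
    // the buffer fills up, read() returns 0 forever, and the loop spins -
    // which is why the refactored method appeared to hang on large files.
    static byte[] hash(String file, String algo) throws Exception {
        MessageDigest md = MessageDigest.getInstance(algo);
        try (FileInputStream fis = new FileInputStream(file);
             FileChannel fc = fis.getChannel()) {
            ByteBuffer bbf = ByteBuffer.allocate(8192);
            int n;
            while ((n = fc.read(bbf)) != -1) {
                md.update(bbf.array(), 0, n);
                bbf.clear();
            }
        }
        return md.digest();
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("hashdemo", ".bin");
        Files.write(tmp, "hello".getBytes());
        System.out.println(hash(tmp.toString(), "SHA-512").length); // 64
    }
}
```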
2) don't close both fc and fis; it's redundant, and closing fis is enough. The FileInputStream.close API says:
If this stream has an associated channel then the channel is closed as well.
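A sketch of how suggestion 2 plays out with try-with-resources (names here are illustrative): only the stream is declared as a resource, and per the API quote above, closing it also closes the channel it spawned, even when an exception is thrown.

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;

public class CloseDemo {

    // The channel is never closed explicitly; it is closed
    // automatically when the try block closes the stream.
    static long sizeOf(String file) throws IOException {
        try (FileInputStream fis = new FileInputStream(file)) {
            FileChannel fc = fis.getChannel();
            return fc.size();
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("closedemo", ".bin");
        Files.write(tmp, new byte[]{1, 2, 3, 4, 5});
        System.out.println(sizeOf(tmp.toString())); // 5
    }
}
```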
3) if you want to improve performance with a FileChannel, use direct allocation:
ByteBuffer.allocateDirect(1024);
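Note that a direct buffer has no backing array, so bbf.array() would throw UnsupportedOperationException; the bytes have to be drained with flip()/get(), much as in the question's latest update. A sketch combining all three suggestions (class name and buffer size are illustrative):

```java
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.security.MessageDigest;

public class DirectBufferHashDemo {

    static byte[] hash(Path file, String algo) throws Exception {
        MessageDigest md = MessageDigest.getInstance(algo);
        ByteBuffer bbf = ByteBuffer.allocateDirect(8192);
        byte[] scratch = new byte[8192]; // reused; direct buffers expose no array()
        try (FileChannel fc = FileChannel.open(file, StandardOpenOption.READ)) {
            while (fc.read(bbf) != -1) {
                bbf.flip();             // switch from filling to draining
                int n = bbf.remaining();
                bbf.get(scratch, 0, n); // copy out of the direct buffer
                md.update(scratch, 0, n);
                bbf.clear();            // ready for the next read
            }
        }
        return md.digest();
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("directdemo", ".bin");
        Files.write(tmp, new byte[20000]); // larger than one 8192-byte buffer
        System.out.println(hash(tmp, "SHA-512").length); // 64
    }
}
```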
Regarding java - FileChannel, ByteBuffer, and hashing files, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/16050827/