c# - Amazon S3 stream too long

Tags: c# amazon-web-services amazon-s3

When I try to download a large file (a 2 GB zip) from Amazon S3, it throws the exception "Stream was too long". This is how I read the file from Amazon into a stream:

  var s3File = new S3FileInfo(Client, BucketName, ObjectKey);
  var stream = s3File.OpenRead();

Is it possible to read the file content in small chunks and then merge them locally?
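(A note on the cause, not from the original post: `S3FileInfo.OpenRead` buffers the object, and a .NET `MemoryStream` cannot grow past about 2 GB, which is what produces "Stream was too long". One sketch of a fix, assuming the AWSSDK.S3 package: `TransferUtility` streams the object straight to disk instead of into memory. The bucket name, key, and local path below are placeholders.)

```csharp
using Amazon.S3;
using Amazon.S3.Transfer;

class Downloader
{
    static void Main()
    {
        // Credentials and region are picked up from the environment/profile.
        var client = new AmazonS3Client();

        // TransferUtility writes directly to the file, so the whole
        // object is never held in a single in-memory stream.
        var transfer = new TransferUtility(client);
        transfer.Download(@"C:\temp\bigfile.zip", "my-bucket", "my-key");
    }
}
```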

- Alan -

Best Answer

    public class BuketObjectResult
    {
        public bool Success { get; set; }

        public long Size { get; set; }
    }


    public void GetBucketObjectData()
    {
        try
        {
            BuketObjectResult res = CheckFile();

            // Divide the file into CHUNK pieces, e.g. a 2 GB file in ~20 MB chunks.
            const int CHUNK = 100;

            int chunkSize = (int)(res.Size / CHUNK);

            if (!res.Success || res.Size == 0 || chunkSize <= 0)
            {
                res.Success = false;
                return;
            }

            string fileName = "Your file name";

            long startPosition = 0;
            long endPosition = 0;

            while (startPosition >= 0)
            {
                byte[] chunk = new byte[chunkSize];

                endPosition = chunkSize + startPosition;

                if (endPosition > res.Size) // last chunk: read only the rest of the file
                    endPosition = res.Size;

                GetObjectRequest request = new GetObjectRequest
                {
                    BucketName = "your bucket name",
                    Key = "your key",
                    ByteRange = new ByteRange(startPosition, endPosition)
                };

                using (GetObjectResponse response = s3client.GetObject(request))
                using (Stream responseStream = response.ResponseStream)
                using (FileStream fileStream = File.Open(fileName, FileMode.Append))
                {
                    int readIndex = ReadChunk(responseStream, ref chunk);

                    startPosition += readIndex;

                    if (readIndex != 0)
                    {
                        fileStream.Write(chunk, 0, readIndex);
                    }

                    if (readIndex != chunk.Length) // a short read means we reached the end of the file
                        break;
                }
            }

            // Verify the downloaded size matches the object size.
            FileInfo fi = new FileInfo(fileName);

            if (fi.Length == res.Size)
            {
                res.Success = true;
            }
        }
        catch (Exception e)
        {
            // At minimum, log the failure instead of swallowing it silently.
            Console.Error.WriteLine(e);
        }
    }

    public BuketObjectResult CheckFile()
    {
        BuketObjectResult res = new BuketObjectResult() { Success = false };

        try
        {
            ListObjectsRequest request = new ListObjectsRequest()
            {
                BucketName = "bucketName here",
                Delimiter = "/",
                Prefix = "Location here"
            };

            ListObjectsResponse response = s3client.ListObjects(request);

            if (response.S3Objects != null && response.S3Objects.Count > 0)
            {
                S3Object o = response.S3Objects.Where(x => x.Size != 0).FirstOrDefault();

                if (o != null)
                {
                    res.Success = true;
                    res.Size = o.Size;
                }
            }
        }
        catch (Exception e)
        {
            // Log instead of swallowing the exception.
            Console.Error.WriteLine(e);
        }

        return res;
    }
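(The answer above calls a `ReadChunk` helper that it never defines. A minimal sketch of what it presumably does, assuming it simply loops `Stream.Read` until the buffer is full or the stream ends, since `Read` may return fewer bytes than requested:)

```csharp
using System;
using System.IO;

static class StreamHelpers
{
    // Fill the buffer from the stream, looping because Stream.Read
    // may return fewer bytes than asked for. Returns the number of
    // bytes actually read; less than chunk.Length means end of stream.
    public static int ReadChunk(Stream stream, ref byte[] chunk)
    {
        int total = 0;
        while (total < chunk.Length)
        {
            int read = stream.Read(chunk, total, chunk.Length - total);
            if (read == 0) // end of stream
                break;
            total += read;
        }
        return total;
    }
}
```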

On c# - Amazon S3 stream too long, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/37425732/
