[GH-ISSUE #414] cannot partial update using ranges beyond the current file size #225

Closed
opened 2026-04-08 16:51:16 +03:00 by zhus · 7 comments
Owner

Originally created by @fyears on GitHub (Jul 13, 2024).
Original GitHub issue: https://github.com/sigoden/dufs/issues/414

Problem

In this PR (https://github.com/sigoden/dufs/pull/343) dufs added support for sabre-style partial uploads. But it seems that dufs currently cannot accept ranges beyond the current file size.

Steps to reproduce:

  1. Create a 0-byte file using PUT. Everything is ok.
  2. Append a few bytes to that file using PATCH with X-Update-Range: append. Everything is ok, too.
  3. Write a few bytes to that file using PATCH with X-Update-Range: bytes=3-6, where the start offset is larger than the current uploaded file size. The server returns a 400 error.

I think the problem may be that dufs doesn't implement this rule from sabre/dav (https://sabre.io/dav/http-patch/):

If the start-byte is beyond the file's current length, the space in between will be filled with NULL bytes (0x00).
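As an aside, the NULL-fill behavior this rule describes matches ordinary sparse-write semantics on most filesystems: seeking past the end of a file and writing there leaves the gap zero-filled. A minimal local sketch of the rule (plain Python file I/O, not dufs code):

```python
import os
import tempfile

# Create an empty file, then write at offset 3, as a
# "bytes=3-6" range update would: the gap is zero-filled.
fd, path = tempfile.mkstemp()
try:
    with open(path, "r+b") as f:
        f.seek(3)          # start-byte beyond the current length (0)
        f.write(b"data")   # bytes 3..6
    with open(path, "rb") as f:
        content = f.read()
    print(content)  # b'\x00\x00\x00data'
finally:
    os.close(fd)
    os.remove(path)
```

A server implementing the sabre/dav rule would effectively do the same seek-then-write on the target file.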

Configuration

Nothing special

dufs -A --enable-cors --bind 0.0.0.0 --port 8080

Log

2024-07-14T00:01:48+08:00 INFO - 192.168.31.198 "PATCH /xxx.pdf" 400

Environment:

  - Dufs version: 0.41.0
  - Browser/WebDAV info: I use the JS webdav client library https://github.com/perry-mitchell/webdav-client
  - OS info: Windows
  - Proxy server: No
zhus closed this issue 2026-04-08 16:51:16 +03:00
Author
Owner

@sigoden commented on GitHub (Jul 13, 2024):

Can you explain why you need this?

Author
Owner

@fyears commented on GitHub (Jul 13, 2024):

For example, I would like to upload a file by chunks in parallel.

If only append is supported, I can only upload the chunks one by one in sequence.

If ranges beyond the file size are supported, I can: create a 0-byte file using PUT, then upload the last byte (or chunk) using a range (so that the whole file is created at its full length on the server), then upload the remaining chunks in parallel.
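The scheme described above can be illustrated locally (a sketch of what the client hopes the server would do, using plain Python file I/O rather than any dufs API): writing the final chunk first fixes the file's length, after which the remaining chunks can land at their offsets in any order.

```python
import os
import tempfile

chunks = [b"AAAA", b"BBBB", b"CC"]  # total file = 10 bytes
offsets = [0, 4, 8]

fd, path = tempfile.mkstemp()
try:
    # 1. "PUT": the file starts out empty (0 bytes).
    # 2. Write the *last* chunk at its offset; the gap before it
    #    is zero-filled, so the file now has its final length.
    os.pwrite(fd, chunks[-1], offsets[-1])
    # 3. Write the remaining chunks in any order (or in parallel).
    for data, off in zip(chunks[:-1], offsets[:-1]):
        os.pwrite(fd, data, off)
    result = os.pread(fd, 10, 0)
    print(result)  # b'AAAABBBBCC'
finally:
    os.close(fd)
    os.remove(path)
```

`os.pwrite` is positional and does not share a file offset, which is what makes the out-of-order (and, on a server, concurrent) writes safe in this sketch.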

Author
Owner

@sigoden commented on GitHub (Jul 14, 2024):

We will not support partial update using ranges beyond the current file size. Here are the reasons:

  1. Dufs is stateless; it cannot coordinate parallel writes to the same file, and therefore cannot support uploading a file by chunks in parallel.

  2. It is too dangerous. A maliciously large range value (1000000000000000-1000000000000001) could exhaust your disk resources. Normally a malicious user would need to upload that much data, but now they would only need to upload 1 byte.

Author
Owner

@fyears commented on GitHub (Jul 14, 2024):

OK. But here is another related point:

dufs returns the sabredav-partialupdate header but doesn't actually support everything in https://sabre.io/dav/http-patch/, so it may confuse the automated detection of programs like mine.

Is there any other special header indicating that the server is dufs, so that my program (and potentially other programs) can differentiate dufs from other normal sabre/webdav servers? For example, something like X-DUFS: true would be sufficient.
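If such a header existed (note: X-DUFS: true is only the reporter's suggestion in this thread, not a shipped dufs feature), client-side detection could be a one-liner over the response headers. A hypothetical sketch:

```python
def is_dufs(headers: dict) -> bool:
    """Detect dufs from HTTP response headers.

    Checks the hypothetical X-DUFS header proposed in this
    thread; header names are compared case-insensitively,
    as HTTP requires.
    """
    lowered = {k.lower(): v for k, v in headers.items()}
    return lowered.get("x-dufs", "").lower() == "true"

print(is_dufs({"X-DUFS": "true"}))   # True
print(is_dufs({"Server": "nginx"}))  # False
```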

Author
Owner

@sigoden commented on GitHub (Jul 14, 2024):

Thanks for your reminder.

Author
Owner

@fyears commented on GitHub (Jul 27, 2024):

OK. May I ask when the new version will be released?

Author
Owner

@sigoden commented on GitHub (Jul 27, 2024):

A new version is generally released every 3 months unless there are significant updates. So the next update will have to wait until the end of August.


Reference: sigoden/dufs#225