panpath.s3_client
S3 client implementation.
- S3Client — Synchronous S3 client implementation using boto3.
- S3SyncFileHandle — Sync file handle for S3 with chunked streaming support.
panpath.s3_client.S3Client(**kwargs)
Synchronous S3 client implementation using boto3.
- copy(source, target, follow_symlinks) — Copy file to target.
- copytree(source, target, follow_symlinks) — Copy directory tree to target recursively.
- delete(path) — Delete S3 object.
- exists(path) (bool) — Check if S3 object exists.
- get_metadata(path) (dict) — Get object metadata.
- glob(path, pattern) (Iterator) — Glob for files matching pattern.
- is_dir(path) (bool) — Check if S3 path is a directory (has objects with prefix).
- is_file(path) (bool) — Check if S3 path is a file.
- is_symlink(path) (bool) — Check if object is a symlink (has symlink-target metadata).
- list_dir(path) (list) — List S3 objects with prefix.
- mkdir(path, parents, exist_ok) — Create a directory marker (empty object with trailing slash).
- open(path, mode, encoding, **kwargs) (Any) — Open S3 object for reading/writing with streaming support.
- read_bytes(path) (bytes) — Read S3 object as bytes.
- read_text(path, encoding) (str) — Read file as text.
- readlink(path) (str) — Read symlink target from metadata.
- rename(source, target) — Rename/move file.
- rmdir(path) — Remove directory marker.
- rmtree(path, ignore_errors, onerror) — Remove directory and all its contents recursively.
- set_metadata(path, metadata) — Set object metadata.
- stat(path) (stat_result) — Get S3 object metadata.
- symlink_to(path, target) — Create symlink by storing target in metadata.
- touch(path, exist_ok) — Create empty file.
- walk(path) (Iterator) — Walk directory tree.
- write_bytes(path, data) — Write bytes to S3 object.
- write_text(path, data, encoding) — Write text to file.
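A minimal usage sketch of the client. It assumes S3Client() can be constructed with no arguments (credentials resolved through boto3's usual chain); the bucket and key are placeholders:

```python
from panpath.s3_client import S3Client

# Assumption: a bare S3Client() falls back to boto3's default credential
# resolution; "my-bucket" and the key below are placeholders.
client = S3Client()

path = "s3://my-bucket/reports/summary.txt"

client.write_text(path, "hello from panpath\n")

if client.exists(path):
    print(client.read_text(path))   # -> "hello from panpath"

client.delete(path)
print(client.exists(path))          # -> False
```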
read_text(path, encoding='utf-8') → str
Read file as text.
write_text(path, data, encoding='utf-8')
Write text to file.
exists(path) → bool
Check if S3 object exists.
read_bytes(path) → bytes
Read S3 object as bytes.
write_bytes(path, data)
Write bytes to S3 object.
delete(path)
Delete S3 object.
list_dir(path) → list
List S3 objects with prefix.
is_dir(path) → bool
Check if S3 path is a directory (has objects with prefix).
is_file(path) → bool
Check if S3 path is a file.
stat(path) → stat_result
Get S3 object metadata.
open(path, mode='r', encoding=None, **kwargs)
Open S3 object for reading/writing with streaming support.
path(str) — S3 path (s3://bucket/key)
mode(str, optional) — File mode ('r', 'w', 'rb', 'wb', 'a', 'ab')
encoding(Optional, optional) — Text encoding (for text modes)
**kwargs(Any) — Additional arguments (chunk_size, upload_warning_threshold, upload_interval supported)
S3SyncFileHandle with streaming support
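A short sketch of the streaming handle returned by open(), using the documented chunk_size keyword; the bucket and key are placeholders and the client is assumed to pick up default boto3 credentials:

```python
from panpath.s3_client import S3Client

client = S3Client()                       # assumes default boto3 credentials
path = "s3://my-bucket/logs/app.log"      # placeholder path

# Write through a streaming handle; close() flushes the buffer to S3.
with client.open(path, mode="w", encoding="utf-8") as fh:
    fh.write("first line\n")
    fh.writelines(["second line\n", "third line\n"])

# Read back lazily; chunk_size (bytes) controls how much is fetched at a time.
with client.open(path, mode="r", encoding="utf-8", chunk_size=8192) as fh:
    for line in fh:
        print(line.rstrip())
```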
mkdir(path, parents=False, exist_ok=False)
Create a directory marker (empty object with trailing slash).
path(str) — S3 path (s3://bucket/path)
parents(bool, optional) — If True, create parent directories as needed
exist_ok(bool, optional) — If True, don't raise error if directory already exists
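A brief sketch of directory markers; the bucket name is a placeholder:

```python
from panpath.s3_client import S3Client

client = S3Client()

# Creates an empty "results/2024/" marker object; with parents=True the
# intermediate "results/" marker is created as well if it is missing.
client.mkdir("s3://my-bucket/results/2024", parents=True, exist_ok=True)

print(client.is_dir("s3://my-bucket/results/2024"))   # -> True
```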
get_metadata(path)
Get object metadata.
path(str) — S3 path
Dictionary containing response metadata including 'Metadata' key with user metadata
set_metadata(path, metadata)
Set object metadata.
path(str) — S3 path
metadata(dict) — Dictionary of metadata key-value pairs
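A sketch of the metadata round trip; per get_metadata(), user metadata is returned under the 'Metadata' key of the response dictionary. The path and metadata values are placeholders:

```python
from panpath.s3_client import S3Client

client = S3Client()
path = "s3://my-bucket/data/model.bin"    # placeholder

client.write_bytes(path, b"\x00" * 16)
client.set_metadata(path, {"owner": "analytics", "version": "3"})

meta = client.get_metadata(path)
print(meta["Metadata"].get("owner"))      # -> "analytics"
```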
is_symlink(path)
Check if object is a symlink (has symlink-target metadata).
path(str) — S3 path
True if symlink metadata exists
readlink(path)
Read symlink target from metadata.
path(str) — S3 path
Symlink target path
symlink_to(path, target)
Create symlink by storing target in metadata.
path(str) — S3 path for the symlink
target(str) — Target path the symlink should point to
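A sketch tying the three symlink helpers together; all paths are placeholders:

```python
from panpath.s3_client import S3Client

client = S3Client()
target = "s3://my-bucket/releases/v2.1.0/app.tar.gz"   # placeholder
link = "s3://my-bucket/releases/latest.tar.gz"         # placeholder

client.write_bytes(target, b"...")        # something for the link to point at
client.symlink_to(link, target)           # stores the target in object metadata

if client.is_symlink(link):
    print(client.readlink(link))          # -> the stored target path
```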
glob(path, pattern)
Glob for files matching pattern.
path(str) — Base S3 path
pattern(str) — Glob pattern (e.g., "*.txt", "**/*.py")
List of matching paths (as PanPath objects or strings)
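A sketch of simple and recursive globbing; the prefix is a placeholder:

```python
from panpath.s3_client import S3Client

client = S3Client()
base = "s3://my-bucket/project"           # placeholder prefix

for match in client.glob(base, "*.txt"):      # .txt files under the prefix
    print(match)

for match in client.glob(base, "**/*.py"):    # recursive pattern for .py files
    print(match)
```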
walk(path)
Walk directory tree.
path(str) — Base S3 path
List of (dirpath, dirnames, filenames) tuples
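A sketch of walking a prefix, mirroring the (dirpath, dirnames, filenames) shape of os.walk; the prefix is a placeholder:

```python
from panpath.s3_client import S3Client

client = S3Client()

for dirpath, dirnames, filenames in client.walk("s3://my-bucket/project"):
    for name in filenames:
        print(f"{dirpath}/{name}")        # full path of every file in the tree
```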
touch(path, exist_ok=True)
Create empty file.
path(str) — S3 path
exist_ok(bool, optional) — If False, raise error if file exists
rename(source, target)
Rename/move file.
source(str) — Source S3 path
target(str) — Target S3 path
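A sketch of moving an object between prefixes; the paths are placeholders:

```python
from panpath.s3_client import S3Client

client = S3Client()

client.write_text("s3://my-bucket/staging/report.csv", "a,b\n1,2\n")
client.rename("s3://my-bucket/staging/report.csv",
              "s3://my-bucket/final/report.csv")

print(client.exists("s3://my-bucket/staging/report.csv"))   # -> False
print(client.exists("s3://my-bucket/final/report.csv"))     # -> True
```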
rmdir(path)
Remove directory marker.
path(str) — S3 path
rmtree(path, ignore_errors=False, onerror=None)
Remove directory and all its contents recursively.
path(str) — S3 path
ignore_errors(bool, optional) — If True, errors are ignored
onerror(Optional, optional) — Callable that accepts (function, path, excinfo)
copy(source, target, follow_symlinks=True)
Copy file to target.
source(str) — Source S3 path
target(str) — Target S3 path
follow_symlinks(bool, optional) — If False, symlinks are copied as symlinks (not dereferenced)
copytree(source, target, follow_symlinks=True)
Copy directory tree to target recursively.
source(str) — Source S3 path
target(str) — Target S3 path
follow_symlinks(bool, optional) — If False, symlinks are copied as symlinks (not dereferenced)
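A sketch combining copytree() and rmtree() for a copy-then-clean-up flow; the prefixes are placeholders:

```python
from panpath.s3_client import S3Client

client = S3Client()

src = "s3://my-bucket/datasets/v1"        # placeholder prefixes
dst = "s3://my-bucket/datasets/v1-backup"

# Recursively copy every object under src to dst; follow_symlinks=False
# would copy symlink objects as symlinks instead of dereferencing them.
client.copytree(src, dst)

# Remove the original tree, ignoring missing-object errors.
client.rmtree(src, ignore_errors=True)
```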
panpath.s3_client.S3SyncFileHandle(client, bucket, blob, prefix, mode='r', encoding=None, chunk_size=4096, upload_warning_threshold=100, upload_interval=1.0)
Sync file handle for S3 with chunked streaming support.
Uses boto3's streaming API for efficient reading of large files.
closed (bool) — Check if file is closed.
- __enter__() (SyncFileHandle) — Enter context manager.
- __exit__(exc_type, exc_val, exc_tb) — Exit context manager.
- __iter__() (SyncFileHandle) — Support iteration over lines.
- __next__() (Union) — Get next line during iteration.
- close() — Close the file and flush write buffer to cloud storage.
- flush() — Flush write buffer to cloud storage.
- read(size) (Union) — Read and return up to size bytes/characters.
- readline(size) (Union) — Read and return one line from the file.
- readlines() (List) — Read and return all lines from the file.
- reset_stream() — Reset the underlying stream to the beginning.
- seek(offset, whence) (int) — Change stream position (forward seeking only).
- tell() (int) — Return current stream position.
- write(data) (int) — Write data to the file.
- writelines(lines) — Write a list of lines to the file.
flush()
Flush write buffer to cloud storage.
After the file is opened, every flush appends to the existing content using provider-native append operations. The difference between 'w' and 'a' modes is that 'w' clears existing content on open, while 'a' preserves it. See the sketch below.
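A sketch of the append semantics described above; the path is a placeholder and the client is assumed to use default boto3 credentials:

```python
from panpath.s3_client import S3Client

client = S3Client()
path = "s3://my-bucket/logs/events.log"   # placeholder

client.write_text(path, "line 1\n")

# Mode 'a' keeps the existing content; each flush appends the buffered data.
with client.open(path, mode="a", encoding="utf-8") as fh:
    fh.write("line 2\n")
    fh.flush()                            # "line 2" is appended now
    fh.write("line 3\n")                  # appended by the flush in close()

print(client.read_text(path))             # line 1 / line 2 / line 3
```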
reset_stream()
Reset the underlying stream to the beginning.
__enter__() → SyncFileHandle
Enter context manager.
__exit__(exc_type, exc_val, exc_tb)
Exit context manager.
read(size=-1)
Read and return up to size bytes/characters.
size(int, optional) — Number of bytes/chars to read (-1 for all)
Data read from file
readline(size=-1) → Union
Read and return one line from the file.
readlines() → List
Read and return all lines from the file.
write(data) → int
Write data to the file.
writelines(lines)
Write a list of lines to the file.
close()
Close the file and flush write buffer to cloud storage.
__iter__() → SyncFileHandle
Support iteration over lines.
__next__() → Union
Get next line during iteration.
tell()
Return current stream position.
Current position in the file
seek(offset, whence=0)
Change stream position (forward seeking only).
offset(int) — Position offset
whence(int, optional) — Reference point (0=start, 1=current, 2=end)
New absolute position
OSError — If backward seeking is attempted
ValueError — If called in write mode or on closed file
Note
- Only forward seeking is supported due to streaming limitations
- SEEK_END (whence=2) is not supported as blob size may be unknown
- Backward seeking requires re-opening the stream
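A sketch of the forward-only seeking rules, using a placeholder path:

```python
from panpath.s3_client import S3Client

client = S3Client()
path = "s3://my-bucket/data/records.bin"  # placeholder

client.write_bytes(path, bytes(range(256)))

with client.open(path, mode="rb") as fh:
    fh.seek(100)            # absolute forward seek from the start
    print(fh.tell())        # -> 100
    chunk = fh.read(16)     # position is now 116
    fh.seek(8, 1)           # relative forward seek (whence=1) is also allowed
    # fh.seek(0) here would raise OSError: backward seeking is not supported
    # fh.seek(0, 2) would also fail, since SEEK_END is not supported
```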