fetch: Retrieve files from a network source

The fetch module provides functions for fetching data from a network source. Low-level functions such as fetch_file do not interpret the returned content, while high-level functions such as fetch_web map the content to a particular format and try to create models from it.

fetch_file(session, url, name, save_name, save_dir, *, uncompress=False, transmit_compressed=True, ignore_cache=False, check_certificates=True, timeout=60, error_status=True)

Experimental API. Fetch a file from a URL.

Parameters:
  • session – a ChimeraX Session

  • url – the URL to fetch

  • name – string to use to identify the data in status messages

  • save_name – where to save the contents of the URL

  • save_dir – the cache subdirectory or None for a temporary file

  • uncompress – if true, uncompress the fetched contents (False)

  • ignore_cache – skip checking for cached file (False)

  • check_certificates – confirm https certificate (True)

  • timeout – maximum time to wait for http response

  • error_status – whether to give a status message if fetching fails

Returns:

the filename

Raises:

UserError – if unsuccessful
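
A minimal usage sketch, assuming the module is importable as chimerax.core.fetch and that a ChimeraX session object is at hand; the URL, data name, and cache subdirectory below are purely illustrative:

from chimerax.core.fetch import fetch_file   # assumed import path
from chimerax.core.errors import UserError

url = "https://www.example.com/data/sample.pdb"   # illustrative URL
try:
    # Download the file, or reuse a cached copy; the result is a local path.
    filename = fetch_file(session, url, "sample data", "sample.pdb", "PDB",
                          uncompress=False, ignore_cache=False)
except UserError as e:
    session.logger.warning(str(e))
else:
    session.logger.info("fetched to %s" % filename)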

html_user_agent(app_dirs)

Experimental API. Return HTML User-Agent header according to RFC 2068.

Parameters:

app_dirs – an appdirs.AppDirs instance (e.g., chimerax.app_dirs)

Notes

The user agent may have single quote characters in it.

Typical use:

url = "http://www.example.com/example_file"
from urllib.request import URLError, Request
request = Request(url, unverifiable=True, headers={
    "User-Agent": html_user_agent(chimerax.app_dirs),
})
try:
    retrieve_url(request, filename, logger=session.logger)
except URLError as e:
    from chimerax.core.errors import UsereError
    raise UserError(str(e))
retrieve_url(url, filename, *, logger=None, uncompress=False, transmit_compressed=True, update=False, check_certificates=True, name=None, timeout=60, error_status=True)

Experimental API. Fetch the requested URL and save its contents in filename.

Parameters:
  • url – the URL to retrieve

  • filename – where to save the contents of the URL

  • name – string to use to identify the data in status messages

  • logger – logger instance to use for status and warning messages

  • uncompress – if true, then uncompress the content

  • update – if true, an existing file is reused unless the web version is newer

  • check_certificates – if true, confirm the https certificate (True)

  • timeout – maximum time to wait for http response

  • error_status – whether to give a status message if fetching fails

Returns:

None if an existing file was used, otherwise the content type

Raises:

urllib.request.URLError or EOFError – if unsuccessful

If ‘update’ is true and the filename already exists, the HTTP headers for the URL are fetched and the last modified date is checked to see whether there is a newer version. If there is no newer version, the existing filename is returned. If there is a newer version, or if the filename does not exist, the contents of the URL are saved to the filename, the file’s modified date is set to the HTTP last modified date, and the filename is returned.
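
A sketch of direct use, again assuming the chimerax.core.fetch import path and a ChimeraX session object for logging; the URL and local filename are illustrative. With update=True an existing local copy is reused unless the web version is newer:

from chimerax.core.fetch import retrieve_url   # assumed import path
from urllib.error import URLError

url = "https://www.example.com/example_file.txt"   # illustrative URL
filename = "example_file.txt"                      # illustrative local path
try:
    content_type = retrieve_url(url, filename, logger=session.logger,
                                update=True, timeout=60)
except (URLError, EOFError) as e:
    session.logger.warning("fetch failed: %s" % e)
else:
    if content_type is None:
        session.logger.info("existing file is up to date")
    else:
        session.logger.info("downloaded %s (%s)" % (filename, content_type))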