scripts module

class scripts.AdobeAccountIDResetScript(_db=None)[source]

Bases: PatronInputScript

classmethod arg_parser(_db)[source]
do_run(*args, **kwargs)[source]
process_patron(patron)[source]

Delete all of a patron’s Credentials that contain an Adobe account ID _or_ connect the patron to a DelegatedPatronIdentifier that contains an Adobe account ID.

class scripts.AvailabilityRefreshScript(_db=None)[source]

Bases: IdentifierInputScript

Refresh the availability information for a LicensePool, direct from the license source.

do_run()[source]
refresh_availability(identifiers)[source]
class scripts.CacheFacetListsPerLane(_db=None, cmd_args=None, testing=False, manager=None, *args, **kwargs)[source]

Bases: CacheRepresentationPerLane

Cache the first two pages of every relevant facet list for this lane.

classmethod arg_parser(_db)[source]
do_generate(lane, facets, pagination, feed_class=None)[source]
facets(lane)[source]

This script covers a user-specified combination of facets, but it defaults to using every combination of available facets for the given lane with a certain sort order. This means every combination of availability, collection, and entry point. That’s a whole lot of feeds, which is why this script isn’t actually used – by the time we generate all of them, they’ve expired.
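
As an illustration of that combinatorial blow-up, here is a back-of-the-envelope count using hypothetical facet values (the real availability, collection, and entry-point enumerations come from the lane and library configuration):

```python
from itertools import product

# Hypothetical facet values -- illustration only, not the actual
# enumerations used by the circulation manager.
availabilities = ["all", "now", "always"]
collections = ["full", "featured"]
entry_points = ["All", "Book", "Audio"]
sort_orders = ["title", "author", "recently_added"]

facet_combos = list(product(availabilities, collections, entry_points))
# First two pages of every facet list, for every sort order:
feeds_per_lane = len(facet_combos) * len(sort_orders) * 2
print(feeds_per_lane)  # 108 feeds for a single lane
```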

name = 'Cache paginated OPDS feed for each lane'
pagination(lane)[source]

This script covers a user-specified number of pages.

parse_args(cmd_args=None)[source]
class scripts.CacheMARCFiles(_db=None, cmd_args=None, *args, **kwargs)[source]

Bases: LaneSweeperScript

Generate and cache MARC files for each input library.

classmethod arg_parser(_db)[source]
name = 'Cache MARC files'
parse_args(cmd_args=None)[source]
process_lane(lane, exporter=None)[source]
process_library(library)[source]
should_process_lane(lane)[source]
should_process_library(library)[source]
class scripts.CacheOPDSGroupFeedPerLane(_db=None, cmd_args=None, testing=False, manager=None, *args, **kwargs)[source]

Bases: CacheRepresentationPerLane

do_generate(lane, facets, pagination, feed_class=None)[source]
facets(lane)[source]

Generate a Facets object for each of the library’s enabled entrypoints. This is the only way grouped feeds are ever generated, so there is no way to override this.

name = 'Cache OPDS grouped feed for each lane'
should_process_lane(lane)[source]
class scripts.CacheRepresentationPerLane(_db=None, cmd_args=None, testing=False, manager=None, *args, **kwargs)[source]

Bases: TimestampScript, LaneSweeperScript

ACCEPT_HEADER = 'application/atom+xml;profile=opds-catalog;kind=acquisition'
classmethod arg_parser(_db)[source]
cache_url(annotator, lane, languages)[source]
cache_url_method = None
facets(lane)[source]

Yield a Facets object for each set of facets this script is expected to handle.

Parameters:

lane – The lane under consideration. (Different lanes may have different available facets.)

Yield:

A sequence of Facets objects.

generate_representation(*args, **kwargs)[source]
name = 'Cache one representation per lane'
pagination(lane)[source]

Yield a Pagination object for each page of a feed this script is expected to handle.

Parameters:

lane – The lane under consideration. (Different lanes may have different pagination rules.)

Yield:

A sequence of Pagination objects.

parse_args(cmd_args=None)[source]
process_lane(lane)[source]

Generate a number of feeds for this lane. One feed will be generated for each combination of Facets and Pagination objects returned by facets() and pagination().
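
The per-lane loop can be sketched as a cross product of the facet and pagination sequences; the generator callable below is a stand-in for the real feed class, and the string arguments stand in for Facets and Pagination objects:

```python
from itertools import product

def feeds_for_lane(facets_list, pages, generate):
    # One feed per (facets, pagination) combination, as described above.
    return [generate(f, p) for f, p in product(facets_list, pages)]

urls = feeds_for_lane(
    ["order=author", "order=title"],  # stand-ins for Facets objects
    [0, 50],                          # stand-ins for Pagination offsets
    lambda f, p: f"/feed?{f}&offset={p}",
)
print(urls)
# ['/feed?order=author&offset=0', '/feed?order=author&offset=50',
#  '/feed?order=title&offset=0', '/feed?order=title&offset=50']
```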

process_library(library)[source]
should_process_lane(lane)[source]
class scripts.CompileTranslationsScript(_db=None)[source]

Bases: Script

A script to combine translation files for circulation, core and the admin interface, and compile the result to be used by the app. The combination step is necessary because Flask-Babel does not support multiple domains yet.
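
The combination step amounts to joining the per-project catalogs into one file before compiling. A rough sketch, assuming the source catalogs have no conflicting msgids (real merging would also need to deduplicate headers):

```python
from pathlib import Path

def combine_catalogs(sources, target):
    """Concatenate several .po catalogs into a single file.

    Sketch only: assumes no msgid appears in more than one source.
    """
    combined = "\n".join(Path(p).read_text() for p in sources)
    Path(target).write_text(combined)
    return target
```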

run()[source]
class scripts.CreateWorksForIdentifiersScript(metadata_web_app_url=None)[source]

Bases: Script

Do the bare minimum to associate each Identifier with an Edition with title and author, so that we can calculate a permanent work ID.

BATCH_SIZE = 100
name = 'Create works for identifiers'
process_batch(batch)[source]
run()[source]
to_check = ['Overdrive ID', 'Bibliotheca ID', 'Gutenberg ID']
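
The batching implied by BATCH_SIZE can be sketched with simple slicing (illustration only; the real script assembles its batches from a database query):

```python
def in_batches(identifiers, batch_size=100):
    # Yield successive batches of at most batch_size identifiers.
    for start in range(0, len(identifiers), batch_size):
        yield identifiers[start:start + batch_size]

ids = [f"id-{n}" for n in range(250)]
sizes = [len(batch) for batch in in_batches(ids)]
print(sizes)  # [100, 100, 50]
```
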
class scripts.DirectoryImportScript(*args, **kwargs)[source]

Bases: TimestampScript

Import some books into a collection, based on a file containing metadata and directories containing ebook and cover files.

annotate_metadata(collection_type, metadata, policy, cover_directory, ebook_directory, rights_uri)[source]

Add a CirculationData and possibly an extra LinkData to the metadata.

Parameters:
  • collection_type (CollectionType) – Collection’s type: open access/protected access

  • metadata (core.metadata_layer.Metadata) – Book’s metadata

  • policy (ReplacementPolicy) – Replacement policy

  • cover_directory (string) – Directory containing book covers

  • ebook_directory (string) – Directory containing books

  • rights_uri (string) – URI explaining the rights status of the works being uploaded

classmethod arg_parser(_db)[source]
do_run(cmd_args=None)[source]
load_circulation_data(collection_type, identifier, data_source, ebook_directory, mirrors, title, rights_uri)[source]

Load an actual copy of a book from disk.

Parameters:
  • collection_type (CollectionType) – Collection’s type: open access/protected access

  • identifier (core.model.identifier.Identifier,) – Book’s identifier

  • data_source (DataSource) – DataSource object

  • ebook_directory (string) – Directory containing books

  • mirrors (Dict[string, MirrorUploader]) – Dictionary containing mirrors for books and their covers

  • title (string) – Book’s title

  • rights_uri (string) – URI explaining the rights status of the works being uploaded

Returns:

A CirculationData that contains the book as an open-access download, or None if no such book can be found

Return type:

CirculationData

load_collection(collection_name, collection_type, data_source_name)[source]

Locate a Collection with the given name.

If the collection is found, it will be associated with the given data source and configured with existing covers and books mirror configurations.

Parameters:
  • collection_name (string) – Name of the Collection.

  • collection_type (CollectionType) – Type of the collection: open access/protected access.

  • data_source_name (string) – Associate this data source with the Collection if it does not already have a data source. A DataSource object will be created if necessary.

Returns:

A 2-tuple (Collection, list of MirrorUploader instances)

Return type:

Tuple[core.model.collection.Collection, List[MirrorUploader]]

Load an actual book cover from disk.

Returns:

A LinkData containing a cover of the book, or None if no book cover can be found.

load_metadata(metadata_file, metadata_format, data_source_name, default_medium_type)[source]

Read a metadata file and convert the data into Metadata records.

name = 'Import new titles from a directory on disk'
run_with_arguments(collection_name, collection_type, data_source_name, metadata_file, metadata_format, cover_directory, ebook_directory, rights_uri, dry_run, default_medium_type=None)[source]
work_from_metadata(collection, collection_type, metadata, policy, *args, **kwargs)[source]

Create a Work instance from metadata.

Returns:

A 2-tuple of (Work object, LicensePool object)

Return type:

Tuple[core.model.work.Work, LicensePool]

class scripts.DisappearingBookReportScript(_db=None)[source]

Bases: Script

Print a TSV-format report on books that used to be in the collection, or should be in the collection, but aren’t.

do_run()[source]
explain(licensepool)[source]
format = '%Y-%m-%d'
investigate(licensepool)[source]

Find when the given LicensePool might have disappeared from the collection.

Parameters:

licensepool – A LicensePool.

Returns:

a 3-tuple (last_seen, title_removal_events, license_removal_events).

last_seen is the latest point at which we knew the book was circulating. If we never knew the book to be circulating, this is the first time we ever saw the LicensePool.

title_removal_events is a query that returns CirculationEvents in which this LicensePool was removed from the remote collection.

license_removal_events is a query that returns CirculationEvents in which LicensePool.licenses_owned went from having a positive number to being zero or a negative number.
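
A report row built from investigate()’s results might be rendered as TSV like this (the column layout is hypothetical; the real explain() decides what gets printed):

```python
import csv
import io
from datetime import datetime

fmt = "%Y-%m-%d"  # matches the script's `format` attribute
last_seen = datetime(2020, 3, 1)
# Hypothetical row: title, last-seen date, counts from the two event queries.
row = ["A Vanished Book", last_seen.strftime(fmt), 2, 1]

out = io.StringIO()
csv.writer(out, delimiter="\t").writerow(row)
print(out.getvalue().rstrip("\r\n"))
# A Vanished Book	2020-03-01	2	1
```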

class scripts.FillInAuthorScript(_db=None)[source]

Bases: MetadataCalculationScript

Fill in Edition.sort_author for Editions that have a list of Contributors, but no .sort_author.

This is a data repair script that should not need to be run regularly.
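
The repair can be pictured as deriving a “Family, Given” sort string from a display name. This helper is a naive illustration only, not the Contributor-based logic the script actually uses:

```python
def naive_sort_name(display_name):
    # Split off the last word as the family name -- naive on purpose;
    # real names (suffixes, particles, corporate authors) need more care.
    parts = display_name.rsplit(" ", 1)
    if len(parts) == 2:
        given, family = parts
        return f"{family}, {given}"
    return display_name

print(naive_sort_name("Jane Austen"))  # Austen, Jane
print(naive_sort_name("Voltaire"))     # Voltaire
```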

name = 'Fill in missing authors'
q()[source]
class scripts.InstanceInitializationScript(*args, **kwargs)[source]

Bases: TimestampScript

An idempotent script to initialize an instance of the Circulation Manager.

This script is intended for use in servers, Docker containers, etc., when the Circulation Manager app is being installed. It initializes the database and sets an appropriate alias on the Elasticsearch index.

Because it’s currently run every time a container is started, it must remain idempotent.

TEST_SQL = 'select * from timestamps limit 1'
do_run(ignore_search=False)[source]
name = 'Instance initialization'
run(*args, **kwargs)[source]
class scripts.LaneResetScript(_db=None)[source]

Bases: LibraryInputScript

Reset a library’s lanes based on language configuration or estimates of the library’s current collection.

classmethod arg_parser(_db)[source]
do_run(output=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>, **kwargs)[source]
process_library(library)[source]
class scripts.LanguageListScript(_db=None)[source]

Bases: LibraryInputScript

List all the languages with at least one non-open access work in the collection.

languages(library)[source]
Yield:

A list of output lines, one per language.

process_library(library)[source]
class scripts.LoanReaperScript(*args, **kwargs)[source]

Bases: TimestampScript

Remove expired loans and holds whose owners have not yet synced with the loan providers.

This stops the library from keeping a record of the final loans and holds of a patron who stopped using the circulation manager.

If a loan or (more likely) hold is removed incorrectly, it will be restored the next time the patron syncs their loans feed.

do_run()[source]
name = 'Remove expired loans and holds from local database'
class scripts.LocalAnalyticsExportScript(_db=None)[source]

Bases: Script

Export circulation events for a date range to a CSV file.

classmethod arg_parser(_db)[source]
do_run(output=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>, cmd_args=None, exporter=None)[source]
class scripts.MetadataCalculationScript(_db=None)[source]

Bases: Script

Force calculate_presentation() to be called on some set of Editions.

This assumes that the metadata is already in the database and will fall into place if we just call Edition.calculate_presentation() and Edition.calculate_work() and Work.calculate_presentation().

Most of these will be data repair scripts that do not need to be run regularly.

name = 'Metadata calculation script'
q()[source]
run()[source]
class scripts.NYTBestSellerListsScript(include_history=False)[source]

Bases: TimestampScript

do_run()[source]
name = 'Update New York Times best-seller lists'
class scripts.NovelistSnapshotScript(*args, **kwargs)[source]

Bases: TimestampScript, LibraryInputScript

do_run(output=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>, *args, **kwargs)[source]
class scripts.ODLImportScript(_db=None, importer_class=None, monitor_class=None, protocol=None, *args, **kwargs)[source]

Bases: OPDSImportScript

Import information from the feed associated with an ODL collection.

IMPORTER_CLASS

alias of ODLImporter

MONITOR_CLASS

alias of ODLImportMonitor

PROTOCOL = 'ODL'
class scripts.OPDSForDistributorsImportScript(_db=None, importer_class=None, monitor_class=None, protocol=None, *args, **kwargs)[source]

Bases: OPDSImportScript

Import all books from the OPDS feed associated with a collection that requires authentication.

IMPORTER_CLASS

alias of OPDSForDistributorsImporter

MONITOR_CLASS

alias of OPDSForDistributorsImportMonitor

PROTOCOL = 'OPDS for Distributors'
class scripts.OPDSForDistributorsReaperScript(_db=None, importer_class=None, monitor_class=None, protocol=None, *args, **kwargs)[source]

Bases: OPDSImportScript

Get all books from the OPDS feed associated with a collection to find out if any have been removed.

IMPORTER_CLASS

alias of OPDSForDistributorsImporter

MONITOR_CLASS

alias of OPDSForDistributorsReaperMonitor

PROTOCOL = 'OPDS for Distributors'
class scripts.Script(_db=None)[source]

Bases: Script

load_config()[source]
class scripts.SharedODLImportScript(_db=None, importer_class=None, monitor_class=None, protocol=None, *args, **kwargs)[source]

Bases: OPDSImportScript

IMPORTER_CLASS

alias of SharedODLImporter

MONITOR_CLASS

alias of SharedODLImportMonitor

PROTOCOL = 'Shared ODL For Consortia'
class scripts.UpdateStaffPicksScript(_db=None)[source]

Bases: Script

DEFAULT_URL_TEMPLATE = 'https://docs.google.com/spreadsheets/d/%s/export?format=csv'
open()[source]
run()[source]