Advanced Python Scheduler (APScheduler) is a Python library that lets you schedule your Python code to be executed later, either just once or periodically. You can add new jobs or remove old ones on the fly as you please. If you store your jobs in a database, they will also survive scheduler restarts and maintain their state. When the scheduler is restarted, it will run all the jobs it should have run while it was offline [1].

Among other things, APScheduler can be used as a cross-platform, application-specific replacement for platform-specific schedulers such as the cron daemon or the Windows task scheduler. Please note, however, that APScheduler is not itself a daemon or service, nor does it come with any command-line tools; it is primarily meant to be run inside existing applications. That said, APScheduler does provide some building blocks for you to build a scheduler service or to run a dedicated scheduler process.

APScheduler has three built-in scheduling systems you can use:

- Cron-style scheduling (with optional start/end times)
- Interval-based execution (runs jobs at even intervals, with optional start/end times)
- One-off delayed execution (runs jobs once, at a set date/time)

You can mix and match scheduling systems and the backends where the jobs are stored any way you like. Supported backends for storing jobs include:

- Memory
- SQLAlchemy (any RDBMS supported by SQLAlchemy works)
- MongoDB
- Redis

APScheduler also integrates with several common Python frameworks, like:

- asyncio (PEP 3156)
- gevent
- Tornado
- Twisted
- Qt (using either PyQt or PySide)

[1] The cutoff period for this is also configurable.
Celery is an open source asynchronous task queue/job queue based on distributed message passing. It is focused on real-time operation, but supports scheduling as well. The execution units, called tasks, are executed concurrently on one or more worker nodes using multiprocessing, Eventlet or gevent. Tasks can execute asynchronously (in the background) or synchronously (wait until ready). Celery is used in production systems to process millions of tasks a day. Celery is written in Python, but the protocol can be implemented in any language. It can also interoperate with other languages using webhooks. The recommended message broker is RabbitMQ, but limited support for Redis, Beanstalk, MongoDB, CouchDB and databases (using SQLAlchemy or the Django ORM) is also available.
The Kolla project is a member of the OpenStack Big Tent Governance. Kolla's mission statement is:

Kolla provides production-ready containers and deployment tools for operating OpenStack clouds.

Kolla provides Docker containers and Ansible playbooks to meet that mission. Kolla is highly opinionated out of the box, but allows for complete customization. This lets operators with little experience deploy OpenStack quickly and, as their experience grows, modify the OpenStack configuration to suit their exact requirements. Kolla provides images to deploy the following OpenStack projects:

- Aodh
- Ceilometer
- Cinder
- Designate
- Glance
- Gnocchi
- Heat
- Horizon
- Ironic
- Keystone
- Magnum
- Manila
- Mistral
- Murano
- Nova
- Neutron
- Swift
- Tempest
- Trove
- Zaqar

As well as these infrastructure components:

- Ceph implementation for Cinder, Glance and Nova
- Open vSwitch and Linuxbridge backends for Neutron
- MongoDB as a database backend for Ceilometer and Gnocchi
- RabbitMQ as a messaging backend for communication between services
- HAProxy and Keepalived for high availability of services and their endpoints
- MariaDB and Galera for highly available MySQL databases
- Heka, a distributed and scalable logging system for OpenStack services

Docker Images

The Docker images are built by the Kolla project maintainers. A detailed process for contributing to the images can be found in the image building guide. The Kolla developers build images in the kollaglue namespace for every tagged release and implement an Ansible deployment for many, but not all, of them. You can view the available images on Docker Hub or with the Docker CLI:

$ sudo docker search kollaglue
MongoEngine is a Document-Object Mapper (think ORM, but for document databases) for working with MongoDB from Python. It uses a simple declarative API, similar to the Django ORM.
PyMongo is the official Python driver for MongoDB.
GridFS is a storage specification for large objects in MongoDB.
Drill supports a variety of NoSQL databases and file systems, including HBase, MongoDB, MapR-DB, HDFS, MapR-FS, Amazon S3, Azure Blob Storage, Google Cloud Storage, Swift, NAS and local files. A single query can join data from multiple datastores. For example, you can join a user profile collection in MongoDB with a directory of event logs in Hadoop. Drill's datastore-aware optimizer automatically restructures a query plan to leverage the datastore's internal processing capabilities. In addition, Drill supports data locality, so it's a good idea to co-locate Drill and the datastore on the same nodes.
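The MongoDB-plus-Hadoop join described above can be sketched as a single Drill SQL statement; the storage plugin names, paths and columns below are hypothetical, and the statement is shown being packaged for Drill's REST API (an HTTP POST of this JSON to a drillbit's `/query.json` endpoint):

```python
import json

# One query joining two datastores: a MongoDB user collection and
# event logs stored as files in HDFS (hypothetical names/paths).
query = """
SELECT u.name, COUNT(e.event_id) AS events
FROM mongo.app.`users` u
JOIN dfs.logs.`/events/2016/` e ON u.user_id = e.user_id
GROUP BY u.name
"""

# Drill's REST API accepts the query as a small JSON document.
payload = json.dumps({"queryType": "SQL", "query": query})
```

Drill resolves the `mongo.` and `dfs.` prefixes through its configured storage plugins, which is what makes the cross-datastore join possible without any ETL step.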