Commit message | Author | Age | Files | Lines
* Travis should test with Python 3.8.2 [python3.8] (Selwin Ong, 2020-06-07, 1 file, -1/+1)
|
* worker.work() should be tested with burst=True (Selwin Ong, 2020-06-07, 1 file, -2/+2)
|
* Job.requeue() doesn't return the job (#1265) (ericatkin, 2020-06-05, 1 file, -1/+1)
|
* Bump version to 1.4.2 [v1.4.2] (Selwin Ong, 2020-05-26, 2 files, -1/+5)
|
* Attempt to fix hmset command for Redis < 4.0 (#1260) (Selwin Ong, 2020-05-26, 1 file, -4/+6)
|     * Attempt to fix hmset command for Redis < 4.0
|     * Temporarily commented out hset(key, mapping) for backward compatibility reasons.
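For context on the compatibility problem: redis-py 3.5 deprecated `hmset()` in favor of `hset()` with a `mapping` keyword, but older redis-py versions reject that keyword. A minimal sketch of the fallback idea (the helper name and the stub clients are hypothetical, not rq's actual code):

```python
def set_hash(connection, key, mapping):
    """Write a dict to a Redis hash, preferring hset(mapping=...)
    (redis-py >= 3.5) and falling back to the deprecated hmset()
    on older clients. Hypothetical helper, for illustration only."""
    try:
        connection.hset(key, mapping=mapping)
    except TypeError:
        # older redis-py: hset() has no `mapping` keyword
        connection.hmset(key, mapping)


class OldClient:
    """Stub mimicking redis-py < 3.5: hset() takes a single field."""
    def __init__(self):
        self.store = {}

    def hset(self, key, field, value):
        self.store.setdefault(key, {})[field] = value

    def hmset(self, key, mapping):
        self.store.setdefault(key, {}).update(mapping)


class NewClient(OldClient):
    """Stub mimicking redis-py >= 3.5: hset() accepts mapping=."""
    def hset(self, key, field=None, value=None, mapping=None):
        if mapping:
            self.store.setdefault(key, {}).update(mapping)


old, new = OldClient(), NewClient()
set_hash(old, "rq:job:abc", {"status": "queued"})
set_hash(new, "rq:job:abc", {"status": "queued"})
assert old.store == new.store == {"rq:job:abc": {"status": "queued"}}
```

Catching `TypeError` keeps one code path for both client generations without an explicit version check.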
* Took into account DST when computing localtime zones. This wasn't ac… (#1258) (Evan Ackmann, 2020-05-24, 2 files, -7/+36)
|     * Took into account DST when computing localtime zones. This wasn't accounted for when using non-UTC datetimes with queue.enqueue_at()
|     * Updates tests with mocked timezones to test both scenarios
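The underlying pitfall: applying one fixed UTC offset to a local datetime is wrong for half the year. A DST-aware zone object derives the offset per datetime, so converting a local `enqueue_at()` time to UTC uses the offset valid on that date. A small illustration using the stdlib `zoneinfo` module (Python 3.9+; not rq's code):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # Python 3.9+ stdlib

tz = ZoneInfo("America/New_York")
winter = datetime(2020, 1, 15, 12, 0, tzinfo=tz)  # EST, UTC-5
summer = datetime(2020, 7, 15, 12, 0, tzinfo=tz)  # EDT, UTC-4

# The UTC offset differs across the DST boundary, so a scheduler
# must compute it per datetime rather than caching one offset.
assert winter.utcoffset() == timedelta(hours=-5)
assert summer.utcoffset() == timedelta(hours=-4)
```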
* Merge pull request #1259 from rq/multi-dependencies (Selwin Ong, 2020-05-23, 4 files, -27/+285)
|\    Multi dependencies
| * Don't try to import cPickle [multi-dependencies] (Selwin Ong, 2020-05-23, 1 file, -6/+1)
| |
| * Merge remote-tracking branch 'origin/master' into multi-dependencies (Selwin Ong, 2020-05-23, 15 files, -34/+81)
| |\
| |/
|/|
* | Remove Python 2.7 from setup.py (Selwin Ong, 2020-05-16, 1 file, -2/+0)
| |
* | Bump version to 1.4.1 [v1.4.1] (Selwin Ong, 2020-05-16, 2 files, -1/+5)
| |
* | Use pickle.HIGHEST_PROTOCOL by default (#1254) (Bo Bayles, 2020-05-16, 2 files, -3/+19)
| |
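On the pickle change above: `pickle.dumps` defaults to `DEFAULT_PROTOCOL`, which can lag behind `HIGHEST_PROTOCOL`, so passing the protocol explicitly opts into the newest, most compact frame format. A quick stdlib illustration:

```python
import pickle

payload = {"func": "myapp.tasks.add", "args": (1, 2)}  # illustrative job data

explicit = pickle.dumps(payload, protocol=pickle.HIGHEST_PROTOCOL)
default = pickle.dumps(payload)

# Both frames round-trip to the same object.
assert pickle.loads(explicit) == payload
assert pickle.loads(default) == payload

# For protocol >= 2, the frame opens with the PROTO opcode followed
# by the protocol number, so the second byte records which protocol
# produced the payload.
assert explicit[1] == pickle.HIGHEST_PROTOCOL
```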
* | Avoid deprecation warnings on redis-py 3.5.0 hmset (#1253) (Bo Bayles, 2020-05-16, 3 files, -5/+15)
| |
* | Merge branch 'master' of github.com:rq/rq (Selwin Ong, 2020-05-13, 1 file, -9/+4)
|\ \
| * | Remove extraneous try/except (#1247) (Michael Angeletti, 2020-05-13, 1 file, -9/+4)
| | |     The exception handling block was raising the caught exception in-place, which caused the original traceback info to be lost. Rather than replace `raise e` with `raise`, I simply removed the whole try/except, since no action was being taken in the except block.
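The traceback point in #1247 is easy to demonstrate: inside an `except` block, a bare `raise` re-raises the active exception with its original traceback, so the frame where the error originated stays visible in the report:

```python
import traceback

def inner():
    raise ValueError("boom")

def passthrough():
    try:
        inner()
    except ValueError:
        raise  # bare raise keeps the original traceback intact

try:
    passthrough()
except ValueError:
    tb = traceback.format_exc()

# The originating frame is still reported in the formatted traceback.
assert "in inner" in tb
```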
* | | Bump version to 1.4.0 [v1.4.0] (Selwin Ong, 2020-05-13, 3 files, -2/+10)
|/ /
* | Slightly increase job key timeout in monitor_work_horse() (Selwin Ong, 2020-05-10, 1 file, -1/+1)
| |
* | Parse job_id as keyword argument to delay() (#1236) (#1243) (grayshirt, 2020-05-10, 1 file, -1/+2)
| |
* | Fix typo in scheduling doc (#1245) (Vincent Jacques, 2020-05-10, 1 file, -1/+1)
| |
* | Always set job.started_at in monitor_work_horse (#1242) (rmartin48, 2020-05-10, 1 file, -1/+1)
| |     Co-authored-by: Russell Martin <russell@divipay.io>
* | Fix some code quality issues (#1235) (Prajjwal Nijhara, 2020-05-03, 4 files, -3/+16)
| |
* | Add the queue to the Redis queues set when scheduling a job (#1238) (Pierre Mdawar, 2020-04-24, 1 file, -5/+7)
| |     * Add the queue to the queues set when scheduling a job
| |     * Fix the registry properties docstrings
| * pipeline calls to get dependency statuses (thomas, 2020-05-13, 1 file, -5/+5)
| |
| * Update Job#dependencies_are_met ... (thomas, 2020-04-27, 3 files, -27/+20)
| |     ... such that it fetches all dependency statuses using SMEMBERS and HGET rather than SORT.
| * Revisions (Thomas Matecki, 2020-04-16, 2 files, -6/+6)
| |     * Rename `dependent_jobs` to `jobs_to_enqueue` in queue.py
| |     * Rename `dependencies_job_ids` to `dependency_ids`.
| |     * Remove `as_text` (no more python2 support). Use `bytes.decode`
| * Change parameter name from `exclude` ... (Thomas Matecki, 2020-04-16, 2 files, -10/+5)
| |     ... to `exclude_job_id`. Also make it a single id, not a set.
| * Undo formatting for coverage stats (thomas, 2020-04-16, 2 files, -26/+15)
| |
| * Address Deleted Dependencies (thomas, 2020-04-16, 3 files, -36/+53)
| |     1) Check `created_at` when checking if dependencies are met. If `created_at` is `None`, the job has been deleted. This is sort of a hack - we just need one of the fields on the job's hash that is ALWAYS populated. You can persist a job to redis without setting status...
| |     2) Job#fetch_dependencies no longer raises NoSuchJob. If one of a job's dependencies has been deleted from Redis, it is not returned from `fetch_dependencies` and no exception is raised.
| * Revert move of status update in `Worker#handle_job_success` (thomas, 2020-04-16, 3 files, -25/+62)
| |     When a job with dependents is _successful_, its dependents are enqueued. Only if the FINISHing job's `result_ttl` is non-zero is the change in status persisted in Redis - that is, when each dependent job is enqueued, the _FINISHing_ job (triggering the enqueueing) has an _outdated_ status in Redis. This avoids a redundant call, because if `result_ttl=0` the job is deleted anyway in `Job#cleanup`. In order to enqueue the dependents, we therefore _exclude_ the FINISHing job from the check that each dependent's dependencies have been met.
| * rename dependencies_finished to dependencies_are_met (thomas, 2020-04-16, 4 files, -7/+8)
| |
| * Change get_dependency_statuses to dependencies_finished (Thomas Matecki, 2020-04-16, 3 files, -65/+35)
| |     Convert the method on Job to return a boolean and rename it. Also use fetch_many in Queue#enqueue_dependents.
| * Do not watch dependency key set (Thomas Matecki, 2020-04-16, 2 files, -30/+2)
| |
| * Undo extra formatting changes (thomas, 2020-04-16, 1 file, -17/+13)
| |
| * Fix patches for python2 (thomas, 2020-04-16, 1 file, -40/+30)
| |
| * Always set status 'FINISHED' when job is successful (thomas, 2020-04-16, 3 files, -22/+38)
| |     Method Queue#enqueue_dependents checks the status of all dependencies of all dependents, and enqueues those dependents for which all dependencies are FINISHED. The enqueue_dependents method WAS called from Worker#handle_job_success BEFORE the status of the successful job was set in Redis, so enqueue_dependents explicitly excluded the _successful_ job from interrogation of dependency statuses, as it would never read as FINISHED in the existing code path; it was assumed that this would be the final status once the current pipeline executed. This commit changes Worker#handle_job_success so that it persists the status of the successful job to Redis every time a job completes (not only if it has a ttl), and does so before enqueue_dependents is called. This allows enqueue_dependents to be less reliant on the out-of-band state of the current _successful job being handled_.
| * Only enqueue dependents when all dependencies are FINISHED (thomas, 2020-04-16, 2 files, -1/+114)
| |
| * Create get_dependencies_statuses method on Job (Thomas Matecki, 2020-04-16, 3 files, -5/+179)
|/      This method shall be used in Queue#enqueue_dependents to determine whether all of a dependent's dependencies have been _FINISHED_.
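The multi-dependencies branch above keeps reworking one idea: a dependent runs only when every dependency has FINISHED, and the job that is finishing right now may be excluded from the check because its own status write has not been persisted yet. A pure-Python sketch of that logic (names echo the commits; the dict stands in for Redis, this is not rq's implementation):

```python
FINISHED = "finished"

def dependencies_are_met(statuses, dependency_ids, exclude_job_id=None):
    """Return True when all dependencies are FINISHED.

    `statuses` maps job_id -> status (a stand-in for Redis HGETs);
    `exclude_job_id` is the currently-finishing job, treated as done
    even though its status write may not have landed in Redis yet.
    """
    return all(
        statuses.get(dep_id) == FINISHED
        for dep_id in dependency_ids
        if dep_id != exclude_job_id
    )

statuses = {"a": FINISHED, "b": "started"}
assert dependencies_are_met(statuses, {"a"})
assert not dependencies_are_met(statuses, {"a", "b"})
# "b" is the job finishing right now, so it is excluded from the check
assert dependencies_are_met(statuses, {"a", "b"}, exclude_job_id="b")
```

This also shows why the branch later reverted the exclusion: once the worker persists FINISHED before enqueueing dependents, the `exclude_job_id` escape hatch becomes unnecessary.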
* Accept lowercase logging level names and accept tuples when setting exception handlers (#1233) (Pierre Mdawar, 2020-04-16, 2 files, -2/+4)
|     * Accept lowercase logging level names
|     * Accept both lists and tuples when setting Worker exception_handlers
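Accepting lowercase level names is a tiny normalization step before handing the value to the `logging` module; a sketch of the idea (the helper name is hypothetical, not rq's exact code):

```python
import logging

def normalize_level(level):
    """Accept 'info', 'INFO', or a numeric logging level alike
    (hypothetical helper illustrating the change in #1233)."""
    if isinstance(level, str):
        # logging exposes level names as module constants (DEBUG, INFO, ...)
        return getattr(logging, level.upper())
    return level

assert normalize_level("info") == logging.INFO
assert normalize_level("WARNING") == logging.WARNING
assert normalize_level(logging.DEBUG) == logging.DEBUG
```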
* Implement Customizable Serializer Support (#1219) (Babatunde Olusola, 2020-04-16, 13 files, -87/+190)
|     * Implement Customizable Serializer Support
|     * Refactor serializer instance methods
|     * Update tests with other serializers
|     * Edit function description
|     * Raise appropriate exception
|     * Update tests for better code coverage
|     * Remove unused imports and unnecessary code
|     * Refactor resolve_serializer
|     * Remove unnecessary alias from imports
|     * Add documentation
|     * Refactor tests, improve documentation
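The serializer feature boils down to: default to `pickle`, and accept any object exposing `dumps`/`loads`. A sketch under those assumptions (the JSON serializer and error message here are illustrative, not rq's shipped classes):

```python
import json
import pickle

class JSONSerializer:
    """Illustrative serializer: anything with dumps/loads works."""
    @staticmethod
    def dumps(obj):
        return json.dumps(obj).encode("utf-8")

    @staticmethod
    def loads(data):
        return json.loads(data.decode("utf-8"))

def resolve_serializer(serializer=None):
    """Fall back to pickle; otherwise require the dumps/loads duck type."""
    if serializer is None:
        return pickle
    if not (hasattr(serializer, "dumps") and hasattr(serializer, "loads")):
        raise NotImplementedError("serializer needs dumps() and loads()")
    return serializer

s = resolve_serializer(JSONSerializer)
assert s.loads(s.dumps({"x": 1})) == {"x": 1}
assert resolve_serializer() is pickle
```

Duck-typing on `dumps`/`loads` means any drop-in (json, msgpack-style wrappers, custom classes) can be plugged in without subclassing.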
* Add sentry_debug and sentry_ca_certs params (#1229) (Paweł Bąk, 2020-04-13, 3 files, -8/+17)
|     Co-authored-by: pawel bak <p.bak@inteliclinic.com>
* FailedJobRegistry.requeue() resets job.started_at and job.ended_at (#1227) (Selwin Ong, 2020-04-01, 5 files, -11/+17)
|
* registry.cleanup() now writes information to job.exc_info (#1226) (Selwin Ong, 2020-03-31, 3 files, -1/+3)
|
* Remove unused code (#1214) (Selwin Ong, 2020-03-09, 1 file, -24/+0)
|
* Bump version to 1.3.0 [v1.3.0] (Selwin Ong, 2020-03-09, 2 files, -1/+8)
|
* fixing HerokuWorkerShutdownTestCase after #1194 (#1213) (Samuel Colvin, 2020-03-09, 2 files, -3/+7)
|
* Properly decode hostname in job.refresh() (Selwin Ong, 2020-03-08, 1 file, -1/+1)
|
* enqueue_at should support explicit args and kwargs (#1211) (Selwin Ong, 2020-03-08, 2 files, -27/+51)
|
* Pass job ID to error handlers (#1201) (Seamus Mac Conaonaigh, 2020-02-28, 1 file, -0/+1)
|     The worker handles exceptions in the job outside of the job's own context, so an exception handler / logger cannot call `get_current_job()` to obtain the job ID. The job ID can be used to locate the job in the failed job registry, which allows useful behaviors such as linking to a failed job on a dashboard in an error report. Closes #1192.
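With the job passed alongside the exception info, a handler can build an error report that links back to the failing job by ID even though `get_current_job()` is unavailable in the worker's context. A minimal handler sketch (the fake job object stands in for a real rq `Job`; the handler body is illustrative):

```python
def report_failure(job, exc_type, exc_value, tb):
    """Exception-handler-style callback: builds an error-report line
    that identifies the failed job by ID (illustrative, not rq code)."""
    return f"job {job.id} failed: {exc_type.__name__}: {exc_value}"

class FakeJob:
    """Stand-in for an rq Job; only the id attribute is used here."""
    id = "d8421b6f"

msg = report_failure(FakeJob(), ValueError, ValueError("boom"), None)
assert "d8421b6f" in msg
assert "ValueError" in msg
```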
* fix kill_horse will cause zombie processes (#1194) (wevsty, 2020-02-25, 1 file, -0/+2)
|     * fix kill_horse will cause zombie processes (fixes issue #1193)
|     * Update tips message
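A killed work horse still has to be reaped with `waitpid`; otherwise it lingers as a zombie until the parent worker exits. A POSIX-only sketch of the kill-then-reap pattern (not rq's actual `kill_horse` code):

```python
import os
import signal
import time

pid = os.fork()
if pid == 0:
    # child ("work horse"): pretend to run a long job
    time.sleep(60)
    os._exit(0)

os.kill(pid, signal.SIGKILL)          # the kill_horse step
reaped, status = os.waitpid(pid, 0)   # the fix: reap, so no zombie remains

assert reaped == pid
assert os.WIFSIGNALED(status) and os.WTERMSIG(status) == signal.SIGKILL
```

Without the `waitpid` call, the child's exit status is never collected and `ps` would show it as `<defunct>` until the worker terminates.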
* Merge branch 'master' of github.com:rq/rq (Selwin Ong, 2020-02-25, 1 file, -0/+6)
|\