The MQs store a models.Task, which did not incorporate all of the fields in a
task.Config. I would very much like to merge these two types, but expect to do
that in a future restructuring, since both are used widely and not properly
cordoned off (Config has a channel plus stdin, stdout, and stderr, so it isn't
purely a 'config' the way Task is).
Since a task.Config is what is actually used to run a container, the result of
this deficiency was #193, where tasks were improperly configured and run
(namely, with the wrong memory).
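To make the mismatch concrete, here is a minimal sketch of the two types. The field names are illustrative, not the actual definitions; only the memory field, the stdio streams, and the channel come from the description above.

```go
package sketch

import "io"

// Task is roughly what the MQs persist. When it lacks a field the runner
// needs (Memory here), the container silently falls back to a default --
// the misconfiguration behind #193.
type Task struct {
	ID     string
	Image  string
	Memory uint64
}

// Config is roughly what actually runs a container. Besides configuration
// it carries runtime plumbing (a channel and the stdio streams), which is
// why it can't simply be merged into models.Task yet.
type Config struct {
	ID     string
	Image  string
	Memory uint64

	Stdin          io.Reader
	Stdout, Stderr io.Writer
	Done           chan struct{} // the channel mentioned above; name hypothetical
}
```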
Async tasks still cannot be hot; they are reverted to the default format. I
would also like to fix this (also part of the restructuring). I actually
started doing so, hence the changes to those files (the surface area of the
change is small and discourages improper future use, so I've left what I've
done).
This will:

closes #193
closes #195
closes #154

It also removes many unused fields from models.Task, since we have not
implemented retries. priority and delay are left in place even though they are
not used either; the main goal here is to resolve #193, and both of those
fields are deeply plumbed into all the MQs, so I'm punting on those two.
* functions: modify datastore to accommodate hot containers support
* functions: protocol between functions and hot containers
* functions: add hot containers clockwork
* fn: add hot containers support
* functions: add bounded concurrency
* functions: plug runners to sync and async interfaces
* functions: update documentation about the new env var
* functions: fix test flakiness
* functions: the runner is self-regulated, no need to set a number of runners
* functions: push the execution to the background on incoming requests
* functions: ensure async tasks are always on
* functions: add prioritization to tasks consumption
Ensures that sync tasks are consumed before async tasks (see the priority
sketch after this list). Also fixes termination race problems for free.
* functions: remove stale comments
* functions: improve mem availability calculation
* functions: parallel run for async tasks
* functions: check for memory availability before pulling async task
* functions: comment about rnr.hasAvailableMemory and sync.Cond
* functions: implement memory check for async runners using Cond vars
* functions: code grooming
- remove unnecessary goroutines
- fix stale docs
- reorganize import group
* Revert "functions: implement memory check for async runners using Cond vars"
This reverts commit 922e64032201a177c03ce6a46240925e3d35430d.
* Revert "functions: comment about rnr.hasAvailableMemory and sync.Cond"
This reverts commit 49ad7d52d341f12da9603b1a1df9d145871f0e0a.
* functions: set a minimum memory availability for sync
* functions: simplify the implementation by removing the priority queue
* functions: code grooming
- code deduplication
- review waitgroups Waits
Previously, async workers were started before the HTTP interface was available
to serve their requests. This fixes that by ensuring async workers are started
only after the HTTP interface is up (see the startup sketch after this list).
Essentially, we get rid of an error message during bootstrap:
ERRO[0000] Could not fetch task error=Get http://127.0.0.1:8080/tasks: dial tcp 127.0.0.1:8080: getsockopt: connection refused
* Reduce test verbosity
* Divert gin's log to the test buffer
* Divert stdlib's log to the test buffer
* Add bolt tests into log buffer
* Add a linebreak to improve log output layout
By default, BoltDB hangs while waiting to acquire the lock on the datafile,
so users may find themselves waiting for something without knowing what. The
added timeout aims to inform users about what is happening.
This also renames MQADR to TASKSRV and refactors the configuration to read
environment variables. RunAsyncRunner now fills in the gaps when parsing
TASKSRV.
Fixes #119
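For the BoltDB timeout above, a minimal sketch of opening the datastore with a lock timeout; the file name and duration are illustrative, but Timeout is the real bolt.Options field:

```go
package main

import (
	"log"
	"time"

	"github.com/boltdb/bolt"
)

func main() {
	// Without a Timeout, bolt.Open blocks indefinitely if another process
	// holds the lock on the datafile. With a Timeout it returns an error
	// instead, so we can tell the user what is actually happening.
	db, err := bolt.Open("bolt.db", 0600, &bolt.Options{Timeout: 1 * time.Second})
	if err != nil {
		log.Fatalf("could not open bolt.db (is another process holding its lock?): %v", err)
	}
	defer db.Close()
}
```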
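The priority sketch referenced in the prioritization commit: a hedged illustration of consuming sync tasks before async ones, not the actual runner code (the channel names and string task type are placeholders):

```go
package main

import (
	"fmt"
	"time"
)

// consume drains sync tasks before async ones. The first, non-blocking
// select gives the sync channel absolute priority; only when no sync task
// is pending does the second select wait on either source (or shutdown).
func consume(sync, async <-chan string, done <-chan struct{}) {
	for {
		select {
		case t := <-sync:
			fmt.Println("sync:", t)
			continue
		default:
		}
		select {
		case t := <-sync:
			fmt.Println("sync:", t)
		case t := <-async:
			fmt.Println("async:", t)
		case <-done:
			return // a single shutdown path; plausibly why the races went away for free
		}
	}
}

func main() {
	sync, async := make(chan string, 2), make(chan string, 2)
	done := make(chan struct{})
	async <- "a1"
	sync <- "s1"
	sync <- "s2"
	go func() { time.Sleep(100 * time.Millisecond); close(done) }()
	consume(sync, async, done) // prints s1 and s2 before a1
}
```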
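And the startup sketch for the bootstrap-ordering fix: bind the listener first, then start the workers. Handler and worker names here are placeholders; the point is that once net.Listen returns, the kernel already queues incoming connections, so pollers can no longer hit "connection refused":

```go
package main

import (
	"log"
	"net"
	"net/http"
)

// runAsyncWorkers stands in for the real async runner loop that polls
// http://127.0.0.1:8080/tasks for work.
func runAsyncWorkers() {
	if _, err := http.Get("http://127.0.0.1:8080/tasks"); err != nil {
		log.Println("Could not fetch task:", err) // the bootstrap error we no longer expect
	}
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/tasks", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("{}")) // placeholder task endpoint
	})

	// Bind the socket before starting the workers.
	ln, err := net.Listen("tcp", "127.0.0.1:8080")
	if err != nil {
		log.Fatal(err)
	}
	go runAsyncWorkers() // safe: the listener already accepts connections

	log.Fatal(http.Serve(ln, mux))
}
```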