Commit Graph

1409 Commits

Author SHA1 Message Date
Tolga Ceylan
74a51f3f88 fn: reorg agent config (#853)
* fn: reorg agent config

*) Move constants in agent to agent config, which helps
with testing and tuning.
*) Add max total CPU & memory for testing & clamping max
mem & CPU usage if needed.

* fn: adjust PipeIO time
* fn: for hot, cannot reliably test EndOfLogs in TestRouteRunnerExecution
2018-03-13 18:38:47 -07:00
CI
1988d92c83 fnserver: 0.3.372 release [skip ci] 2018-03-13 23:05:04 +00:00
Reed Allman
9eaf824398 add jaeger support, link hot container & req span (#840)
* add jaeger support, link hot container & req span

* adds jaeger support now with FN_JAEGER_URL, there's a simple tutorial in the
operating/metrics.md file now and it's pretty easy to get up and running.
* links a hot request span to a hot container span. when we change this to
sample at a lower ratio we'll need to finagle the hot container span to always
sample or something, otherwise we'll hide that info. at least, since we're
sampling at 100% for now if this is flipped on, can see freeze/unfreeze etc.
if they hit. this is useful for debugging. note that zipkin's exporter does
not follow the link at all, hence jaeger... and they're backed by the Cloud
Empire now (CNCF) so we'll probably use it anyway.

* vendor: add thrift for jaeger
2018-03-13 15:57:12 -07:00
CI
a7347a88b7 fnserver: 0.3.371 release [skip ci] 2018-03-13 22:52:34 +00:00
Reed Allman
7fbbd75349 fn, dockerd pid collector & go collector metrics (#837)
* fn, dockerd pid collector & go collector metrics

the prometheus client we're using has a nice collector for process metrics and
for go metrics. these are things we are very interested in operationally and
recently the benevolent team at opencensus made this possible again, so this
hooks it up for us with added dockerd sugar.

nannying the dockerd we're using should be super useful since that thing likes
to get carried away, it'll be nice to differentiate memory/cpu usage between
dockerd / the host / fn. this will basically only work in a 'dind'
environment, or on a linux host that is running fn outside of docker that is
configured with the permissions to be able to check this.  otherwise, it will
simply fail. we also probably want disk i/o and net i/o information for that
as well, or at least it would be interesting to differentiate from the host,
but this isn't hooked up in the default collectors unfortunately.

dockerd:

```
dockerd_process_cpu_seconds_total 520.74
dockerd_process_max_fds 1.048576e+06
dockerd_process_resident_memory_bytes 9.033728e+07
dockerd_process_start_time_seconds 1.52029677322e+09
dockerd_process_virtual_memory_bytes 1.782509568e+09
```

fn:

```
fn_process_cpu_seconds_total 0.14
fn_process_max_fds 1024
fn_process_open_fds 12
fn_process_resident_memory_bytes 2.7348992e+07
fn_process_start_time_seconds 1.52056274238e+09
fn_process_virtual_memory_bytes 7.20068608e+08
```

go:

```
go_gc_duration_seconds{quantile="0"} 4.4194e-05
go_gc_duration_seconds{quantile="0.25"} 9.8118e-05
go_gc_duration_seconds{quantile="0.5"} 0.000105989
go_gc_duration_seconds{quantile="0.75"} 0.000106251
go_gc_duration_seconds{quantile="1"} 0.000157864
go_gc_duration_seconds_sum 0.000512416
go_gc_duration_seconds_count 5
go_goroutines 30
go_memstats_alloc_bytes 3.897696e+06
go_memstats_alloc_bytes_total 1.2916016e+07
go_memstats_buck_hash_sys_bytes 1.45034e+06
go_memstats_frees_total 75399
go_memstats_gc_sys_bytes 450560
go_memstats_heap_alloc_bytes 3.897696e+06
go_memstats_heap_idle_bytes 868352
go_memstats_heap_inuse_bytes 5.750784e+06
go_memstats_heap_objects 29925
go_memstats_heap_released_bytes_total 0
go_memstats_heap_sys_bytes 6.619136e+06
go_memstats_last_gc_time_seconds 1.520562751182639e+09
go_memstats_lookups_total 239
go_memstats_mallocs_total 105324
go_memstats_mcache_inuse_bytes 3472
go_memstats_mcache_sys_bytes 16384
go_memstats_mspan_inuse_bytes 90592
go_memstats_mspan_sys_bytes 98304
go_memstats_next_gc_bytes 6.31304e+06
go_memstats_other_sys_bytes 710548
go_memstats_stack_inuse_bytes 720896
go_memstats_stack_sys_bytes 720896
go_memstats_sys_bytes 1.0066168e+07
```

* cache pid until it stops working
2018-03-13 15:42:43 -07:00
CI
139f12b19b fnserver: 0.3.370 release [skip ci] 2018-03-13 21:42:32 +00:00
CI
05c1bf1468 fnserver: 0.3.369 release [skip ci] 2018-03-13 21:21:32 +00:00
Reed Allman
4084b727c0 phase 2: mattes/migrate -> migratex (#848)
* move mattes migrations to migratex

* changes format of migrations to migratex format
* updates test runner to use new interface (double checked this with printlns,
the tests go fully down and then up, and work on pg/mysql)

* remove mattes/migrate

* update tests from deps

* update readme

* fix other file extensions
2018-03-13 14:12:34 -07:00
CI
1f43545b63 fnserver: 0.3.368 release [skip ci] 2018-03-13 15:59:47 +00:00
Dario Domizioli
2c8b02c845 Make PureRunner an Agent so that it encapsulates its grpc server (#834)
* Refactor PureRunner as an Agent so that it encapsulates its grpc server
* Maintain a list of extra contexts for the server to select on to handle errors and cancellations
2018-03-13 15:51:32 +00:00
CI
fa828c8770 fnserver: 0.3.367 release [skip ci] 2018-03-12 22:06:14 +00:00
CI
ce4551e129 fnserver: 0.3.366 release [skip ci] 2018-03-12 18:26:32 +00:00
Tolga Ceylan
e80a06937b fn: timeouts and container exists should stop slot queuing (#843)
1) In theory it may be possible for an exited container to
requeue a slot; close this gap by always setting a fatal error
for a slot if a container has exited.
2) When a client request times out or is cancelled (client
disconnect, etc.) the slot should not be allowed to be
requeued, and the container should terminate to avoid accidental
mixing of the previous response into the next.
2018-03-12 11:18:55 -07:00
CI
080b46d4cb fnserver: 0.3.365 release [skip ci] 2018-03-12 17:39:55 +00:00
Reed Allman
96aa2a67ae phase 1 sqlx migrator (#825)
code is feature complete in the general sense, with minor TODO left.

this is just a patch with 'migratex' and does not use it for fn's migrations
yet, would like to get feedback prior to doing that.

presenting:

A migration library loosely based on pressly/goose and mattes/migrate design,
that does migrations across a smattering of sql databases by only accepting a
`*sqlx.DB`.

why?

* goose didn't support kindly allowing us to rebind transactions based on a
given db to various dialects or offer oracle support
* goose didn't support locking the db (maybe not needed with tx? it's late..
we may want to lock the whole db eventually?)
* goose requires us to do semi-complex migration to it from mattes/migrate
* mattes has stepped down as migrate maintainer and the project is in flux
* mattes/migrate did not allow us to define migrations in go and rebind to
different dialects, an issue since we need to insert ids in our own format and
can't define this in sql
* neither handled context plumbing and risked issues there for various
reasons (deadlock, etc).
* I think I'm forgetting 1 or 2

in the style of goose, this lets us define `*sqlx.Tx` up and down funcs in go
code, but uses mattes' migration table so we don't need to migrate that and
retains its lock behavior with added tx sugar and fewer errors. most
importantly, this code is terse, leveraging sqlx to support a lot of sql dbs
(unlike mattes) and we control this. there is one useful TODO to handle
migrations failing at startup more gracefully, in prod stuff like that will be
nice to have. open to discussion of putting in a separate library, the
landscape of go sql migrators is... really something.

TODO make test suite and test against sqlite3, pg, mysql [, oracledb] like we
have for our own unit tests. I'm thinking it's faster to wire up through
there and use our bevy of migrations?
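The shape of a migratex-style migration might look like this (a sketch using database/sql in place of `*sqlx.Tx`; all names are assumptions, not migratex's actual API):

```go
package main

import (
	"database/sql"
	"fmt"
	"sort"
)

// Migration is a hypothetical shape for a migration defined in Go:
// Up and Down are plain funcs taking a transaction, so ids and other
// dialect-specific values can be generated in code rather than SQL.
type Migration struct {
	Version int64
	Up      func(*sql.Tx) error
	Down    func(*sql.Tx) error
}

// sortMigrations orders migrations so they apply lowest version first.
func sortMigrations(ms []Migration) {
	sort.Slice(ms, func(i, j int) bool { return ms[i].Version < ms[j].Version })
}

func main() {
	ms := []Migration{
		{Version: 3}, {Version: 1}, {Version: 2},
	}
	sortMigrations(ms)
	for _, m := range ms {
		fmt.Println(m.Version)
	}
	// prints 1, 2, 3
}
```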
2018-03-12 10:30:58 -07:00
CI
00eed1ef73 fnserver: 0.3.364 release [skip ci] 2018-03-12 17:26:33 +00:00
Tolga Ceylan
ea2b3f214c fn: enable log checks in runner test (#838) 2018-03-12 10:18:55 -07:00
CI
7ecba586d0 fnserver: 0.3.363 release [skip ci] 2018-03-12 13:44:52 +00:00
Andrea Rosa
3261e48843 Add a timeout to the net dialer (#844)
This change adds the option to set a timeout for the dialer used in
making gRPC connections; with that we remove the check on the state of
the connections and therefore remove any potential race conditions.
2018-03-12 13:36:53 +00:00
CI
4f0c91b958 fnserver: 0.3.362 release [skip ci] 2018-03-12 09:46:34 +00:00
Andrea Rosa
43547a572f Check runner connection before sending requests (#831)
If a runner disconnects ungracefully, the connection can get stuck in
connecting mode. This change verifies the state of the connection
before starting to execute a call; if the client connection is not
ready we fail fast to give a chance to the next runner (if any) to
execute the call.
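The fail-fast placement loop might look roughly like this (all names hypothetical; the real check inspects the gRPC connection state):

```go
package main

import (
	"errors"
	"fmt"
)

// Runner is a hypothetical view of a runner connection: Ready mirrors
// a connectivity check, Place executes the call on that runner.
type Runner interface {
	Ready() bool
	Place(call string) error
}

// tryRunners skips runners whose connection is not ready, giving the
// next runner a chance to execute the call instead of blocking.
func tryRunners(runners []Runner, call string) error {
	for _, r := range runners {
		if !r.Ready() {
			continue // stuck in connecting mode: fail fast, move on
		}
		if err := r.Place(call); err == nil {
			return nil
		}
	}
	return errors.New("no ready runner could place the call")
}

type fakeRunner struct{ ready bool }

func (f fakeRunner) Ready() bool        { return f.ready }
func (f fakeRunner) Place(string) error { return nil }

func main() {
	err := tryRunners([]Runner{fakeRunner{false}, fakeRunner{true}}, "call-1")
	fmt.Println(err) // <nil>: the second runner was ready
}
```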
2018-03-12 09:38:27 +00:00
CI
47383d9ff7 fnserver: 0.3.361 release [skip ci] 2018-03-10 00:59:59 +00:00
Dario Domizioli
9b28497cff Add a basic concurrency test for the dataplane system tests (#832)
Add a basic concurrency test for the dataplane system tests. Also remove some spurious logging.
2018-03-10 00:51:02 +00:00
Tolga Ceylan
afeb8e6f6a fn: json excess data check should ignore whitespace (#830)
* fn: json excess data check should ignore whitespace

* fn: adjustments and test case
2018-03-09 11:59:30 -08:00
CI
4b37342ad4 fnserver: 0.3.360 release [skip ci] 2018-03-09 18:04:44 +00:00
Tolga Ceylan
7177bf3923 fn: enable failing test back (#826)
* fn: enable failing test back

* fn: fortifying the stderr output

Modified limitWriter to discard excess data instead
of returning error, this is to allow stderr/stdout
pipes flowing to avoid head-of-line blocking or
data corruption in container stdout/stderr output stream.
2018-03-09 09:57:28 -08:00
Tolga Ceylan
f85294b0fc fn: log agent cfg with field names (#829) 2018-03-09 09:53:16 -08:00
CI
b2fc5243ee fnserver: 0.3.359 release [skip ci] 2018-03-09 00:53:14 +00:00
CI
79afb69c3a fnserver: 0.3.358 release [skip ci] 2018-03-08 23:54:24 +00:00
Tolga Ceylan
0ef0118150 fn: wait for async attach with success channel (#810)
* fn: wait for async attach with success channel

* fn: debug logs in test.sh

* fn: circleci test output as artifact

* fn: docker attach non-blocking adjustments

* fn: remove retry from risky NB attach
2018-03-08 15:46:32 -08:00
CI
e24865f704 fnserver: 0.3.357 release [skip ci] 2018-03-08 22:53:10 +00:00
Gerardo Viedma
8af57da7b2 Support load-balanced runner groups for multitenant compute isolation (#814)
* Initial stab at the protocol

* initial protocol sketch for node pool manager

* Added http header frame as a message

* Force the use of WithAgent variants when creating a server

* adds grpc models for node pool manager plus go deps

* Naming things is really hard

* Merge (and optionally purge) details received by the NPM

* WIP: starting to add the runner-side functionality of the new data plane

* WIP: Basic startup of grpc server for pure runner. Needs proper certs.

* Go fmt

* Initial agent for LB nodes.

* Agent implementation for LB nodes.

* Pass keys and certs to LB node agent.

* Remove accidentally left reference to env var.

* Add env variables for certificate files

* stub out the capacity and group membership server channels

* implement server-side runner manager service

* removes unused variable

* fixes build error

* splits up GetCall and GetLBGroupId

* Change LB node agent to use TLS connection.

* Encode call model as JSON to send to runner node.

* Use hybrid client in LB node agent.

This should provide access to get app and route information for the call
from an API node.

* More error handling on the pure runner side

* Tentative fix for GetCall problem: set deadlines correctly when reserving slot

* Connect loop for LB agent to runner nodes.

* Extract runner connection function in LB agent.

* drops committed capacity counts

* Bugfix - end state tracker only in submit

* Do logs properly

* adds first pass of tracking capacity metrics in agent

* makes memory capacity metric uint64

* makes memory capacity metric uint64

* removes use of old capacity field

* adds remove capacity call

* merges overwritten reconnect logic

* First pass of a NPM

Provide a service that talks to a (simulated) CP.

- Receive incoming capacity assertions from LBs for LBGs
- expire LB requests after a short period
- ask the CP to add runners to a LBG
- note runner set changes and readvertise
- scale down by marking runners as "draining"
- shut off draining runners after some cool-down period

* add capacity update on schedule

* Send periodic capacity metrics

Sending capacity metrics to node pool manager

* splits grpc and api interfaces for capacity manager

* failure to advertise capacity shouldn't panic

* Add some instructions for starting DP/CP parts.

* Create the poolmanager server with TLS

* Use logrus

* Get npm compiling with cert fixups.

* Fix: pure runner should not start async processing

* brings runner, nulb and npm together

* Add field to acknowledgment to record slot allocation latency; fix a bug too

* iterating on pool manager locking issue

* raises timeout of placement retry loop

* Fix up NPM

Improve logging

Ensure that channels etc. are actually initialised in the structure
creation!

* Update the docs - runners GRPC port is 9120

* Bugfix: return runner pool accurately.

* Double locking

* Note purges as LBs stop talking to us

* Get the purging of old LBs working.

* Tweak: on restart, load runner set before making scaling decisions.

* more agent synchronization improvements

* Deal with the CP pulling out active hosts from under us.

* lock at lbgroup level

* Send request and receive response from runner.

* Add capacity check right before slot reservation

* Pass the full Call into the receive loop.

* Wait for the data from the runner before finishing

* force runner list refresh every time

* Don't init db and mq for pure runners

* adds shutdown of npm

* fixes broken log line

* Extract an interface for the Predictor used by the NPM

* purge drained connections from npm

* Refactor of the LB agent into the agent package

* removes capacitytest wip

* Fix undefined err issue

* updating README for poolmanager set up

* use retrying dial for lb to npm connections

* Rename lb_calls to lb_agent now that all functionality is there

* Use the right deadline and errors in LBAgent

* Make stream error flag per-call rather than global otherwise the whole runner is damaged by one call dropping

* abstracting gRPCNodePool

* Make stream error flag per-call rather than global otherwise the whole runner is damaged by one call dropping

* Add some init checks for LB and pure runner nodes

* adding some useful debug

* Fix default db and mq for lb node

* removes unreachable code, fixes typo

* Use datastore as logstore in API nodes.

This fixes a bug caused by trying to insert logs into a nil logstore. It
was nil because it wasn't being set for API nodes.

* creates placement abstraction and moves capacity APIs to NodePool

* removed TODO, added logging

* Dial reconnections for LB <-> runners

LB grpc connections to runners are established using a backoff strategy
in the event of reconnections; this keeps the LB up even in case one
of the runners goes away, and reconnects to it as soon as it is back.

* Add a status call to the Runner protocol

Stub at the moment. To be used for things like draindown, health checks.

* Remove comment.

* makes assign/release capacity lockless

* Fix hanging issue in lb agent when connections drop

* Add the CH hash from fnlb

Select this with FN_PLACER=ch when launching the LB.

* small improvement for locking on reloadLBGmembership

* Stabilise the list of Runners returned by NodePool

The NodePoolManager makes some attempt to keep the list of runner nodes advertised as
stable as possible. Let's preserve this effort in the client side. The main point of this
is to attempt to keep the same runner at the same index in the []Runner returned by
NodePool.Runners(lbgid); the ch algorithm likes it when this is the case.

* Factor out a generator function for the Runners so that mocks can be injected

* temporarily allow lbgroup to be specified in HTTP header, while we sort out changes to the model

* fixes bug with nil runners

* Initial work for mocking things in tests

* fix for anonymous go routine error

* fixing lb_test to compile

* Refactor: internal objects for gRPCNodePool are now injectable, with defaults for the real world case

* Make GRPC port configurable, fix weird handling of web port too

* unit test reload Members

* check on runner creation failure

* adding nullRunner in case of failure during runner creation

* Refactored capacity advertisements/aggregations. Made grpc advertisement post asynchronous and non-blocking.

* make capacityEntry private

* Change the runner gRPC bind address.

This uses the existing `whoAmI` function, so that the gRPC server works
when the runner is running on a different host.

* Add support for multiple fixed runners to pool mgr

* Added harness for dataplane system tests, minor refactors

* Add Dockerfiles for components, along with docs.

* Doc fix: second runner needs a different name.

* Let us have three runners in system tests, why not

* The first system test running a function in API/LB/PureRunner mode

* Add unit test for Advertiser logic

* Fix issue with Pure Runner not sending the last data frame

* use config in models.Call as a temporary mechanism to override lb group ID

* make gofmt happy

* Updates documentation for how to configure lb groups for an app/route

* small refactor unit test

* Factor NodePool into its own package

* Lots of fixes to Pure Runner - concurrency woes with errors and cancellations

* New dataplane with static runnerpool (#813)

Added static node pool as default implementation

* moved nullRunner to grpc package

* remove duplication in README

* fix go vet issues

* Fix server initialisation in api tests

* Tiny logging changes in pool manager.

Using `WithError` instead of `Errorf` when appropriate.

* Change some log levels in the pure runner

* fixing readme

* moves multitenant compute documentation

* adds introduction to multitenant readme

* Proper triggering of system tests in makefile

* Fix instructions about starting up the components

* Change db file for system tests to avoid contention in parallel tests

* fixes revisions from merge

* Fix merge issue with handling of reserved slot

* renaming nulb to lb in the doc and images folder

* better TryExec sleep logic clean shutdown

In this change we implement a better way to deal with the sleep inside
the for loop during the attempt for placing a call.
Plus we added a clean way to shutdown the connections with external
component when we shut down the server.

* System_test mysql port

set mysql port for system test to a different value from the one set for
the api tests to avoid conflicts, as they can run in parallel.

* change the container name for system-test

* removes flaky test TestRouteRunnerExecution pending resolution by issue #796

* amend remove_containers to remove new added containers

* Rework capacity reservation logic at a higher level for now

* LB agent implements Submit rather than delegating.

* Fix go vet linting errors

* Changed a couple of error levels

* Fix formatting

* removes commented out test

* adds snappy to vendor directory

* updates Gopkg and vendor directories, removing snappy and adding siphash

* wait for db containers to come up before starting the tests

* make system tests start API node on 8085 to avoid port conflict with api_tests

* avoid port conflicts with api_test.sh which are run in parallel

* fixes postgres port conflict and issue with removal of old containers

* Remove spurious println
2018-03-08 14:45:19 -08:00
CI
d5da6fd8c5 fnserver: 0.3.356 release [skip ci] 2018-03-08 13:26:43 +00:00
Gerardo Viedma
1c49b3e38e Removes flaky runner test TestRouteRunnerIOPipes while #822 is resolved (#823)
* Removes flaky runner test TestRouteRunnerIOPipes while #822 is resolved

* removes flaky log test from TestRouteRunnerExecution
2018-03-08 13:18:42 +00:00
Tolga Ceylan
7677aad450 fn: I/O related improvements (#809)
*) I/O protocol parse issues should shut down the container, as the container
goes into an inconsistent state between calls (e.g. the next call may receive the
previous call's leftovers.)
*) Move ghost read/write code into io_utils in common.
*) Clean unused error from docker Wait()
*) We can catch one case in JSON: if there's remaining unparsed data in the
decoder buffer, we can shut down the container.
*) stdout/stderr when the container is not handling a request are now blocked if the freezer is also enabled.
*) if a fatal err is set for a slot, we do not requeue it and proceed to shutdown
*) added a test function for a few cases with freezer strict behavior
2018-03-07 15:09:24 -08:00
CI
f044e509fc fnserver: 0.3.355 release [skip ci] 2018-03-06 23:20:29 +00:00
Reed Allman
84df77a757 add benchmarks for ids (#816)
... I was curious
2018-03-06 15:13:00 -08:00
CI
54d6a59909 fnserver: 0.3.354 release [skip ci] 2018-03-05 17:42:24 +00:00
Reed Allman
206aa3c203 opentracing -> opencensus (#802)
* update vendor directory, add go.opencensus.io

* update imports

* oops

* s/opentracing/opencensus/ & remove prometheus / zipkin stuff & remove old stats

* the dep train rides again

* fix gin build

* deps from last guy

* start in on the agent metrics

* she builds

* remove tags for now, cardinality error is fussing. subscribe instead of register

* update to patched version of opencensus to proceed for now TODO switch to a release

* meh

fix imports

* println debug the bad boys

* lace it with the tags

* update deps again

* fix all inconsistent cardinality errors

* add our own logger

* fix init

* fix oom measure

* remove bugged removal code

* fix s3 measures

* fix prom handler nil
2018-03-05 09:35:28 -08:00
CI
a0eb75bfa5 fnserver: 0.3.353 release [skip ci] 2018-03-02 01:21:34 +00:00
Tolga Ceylan
89a1fc7c72 Response size clamp (#786)
*) Limit the response HTTP body or JSON response size to FN_MAX_RESPONSE_SIZE (default unlimited)
*) If limits are exceeded, a 502 is returned with 'body too large' in the error message
2018-03-01 17:14:50 -08:00
CI
a30c2c8f1b fnserver: 0.3.352 release [skip ci] 2018-03-01 20:57:09 +00:00
Tolga Ceylan
37ee5f6823 fn: runner tests and test-utils enhancements (#807)
This is prep-work for more tests to come.

*) remove http response -1, this will break in go 1.10
*) add docker id & hostname to fn-test-utils (will be useful
   to check/test which instance a request landed on.)
*) add container start/stop logs in fn-test-utils. To detect
   if/how we miss logs during container start & end.
2018-03-01 12:49:17 -08:00
CI
38742cc19b fnserver: 0.3.351 release [skip ci] 2018-03-01 02:42:36 +00:00
Reed Allman
997c7fce89 fix undefined string slot key (#806)
while escape analysis didn't lie that the bytes underlying this string escaped
to the heap, the reference to them died and led to us getting an undefined
byte array underlying the string.

sadly, this makes 4 allocs here (still down from 31), but only adds 100ns per
op. I still don't get why 'buf' and 'byts' escape to the heap, blaming faulty
escape analysis code.

this one is kind of impossible to write a test for.

found this from doing benchmarking stuff and was getting weird behavior at the
end of runs where calls didn't find a slot, ran bisect on a known-good commit
from a couple weeks ago and found that it was this. voila. this could explain
the variance from the slack dude's benchmarks, too. anyway, confirmed that
this fixes the issue.
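The defensive pattern the fix relies on is that `string(b)` copies its input, so the string stays valid even if the byte slice's backing memory is later reused (a tiny illustration of the safe conversion, not the actual fix):

```go
package main

import "fmt"

func main() {
	buf := []byte("slotkey")
	// string(buf) copies the bytes; unsafe aliasing of the backing
	// array would instead leave the string undefined once the slice's
	// memory is reclaimed or reused.
	key := string(buf)
	buf[0] = 'X'
	fmt.Println(key) // slotkey: unaffected by the mutation
}
```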
2018-02-28 18:35:07 -08:00
CI
f5e3563202 fnserver: 0.3.350 release [skip ci] 2018-03-01 01:52:05 +00:00
Tolga Ceylan
a83f2cfbe8 fn: favor fn-test-utils over hello (to be decommissioned) (#761) 2018-02-28 17:44:13 -08:00
Reed Allman
a2ed1dfb2d push down app listeners to a datastore (#742)
* push down app listeners to a datastore

fnext.NewDatastore returns a datastore that wraps the appropriate methods for
AppListener in a Datastore implementation. this is more future proof than
needing to wrap every call of GetApp/UpdateApp/etc with the listeners, there
are a few places where this can happen and it seems like the AppListener
behavior is supposed to wrap the datastore, not just the front end methods
surrounding CRUD ops on an app. the hairy case that came up was when fiddling
with the create/update route business.

this changes the FireBeforeApp* ops to be an AppListener implementation itself
rather than having the Server itself expose certain methods to fire off the
app listeners, now they're on the datastore itself, which the server can
return the instance of.

small change to BeforeAppDelete/AfterAppDelete -- we were passing in a half
baked struct with only the name filled in and not filling in the fields
anywhere. this is mostly just misleading, we could fill in the app, but we
weren't and don't really want to, it's more to notify of an app deletion event
so that an extension can behave accordingly instead of letting a user inspect
the app. i know of 3 extensions and the changes required to update are very
small.

cleans up all the front end implementations FireBefore/FireAfter.

this seems potentially less flexible than previous version if we do want to
allow users some way to call the database methods without using the
extensions, but that's exactly the trade off, as far as the AppListener's are
described it seems heavily implied that this should be the case.

mostly a feeler, for the above reasons, but this was kind of odorous so just
went for it. we do need to lock in the extension api stuff.

* hand em an app that's been smokin the reefer
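The wrapping described above can be sketched as a decorator over the datastore (pared-down hypothetical types, not fn's actual fnext interfaces):

```go
package main

import "fmt"

// App and Datastore are pared-down stand-ins for the real models.
type App struct{ Name string }

type Datastore interface {
	InsertApp(app *App) (*App, error)
}

// AppListener mirrors the extension hooks fired around datastore calls.
type AppListener interface {
	BeforeAppCreate(app *App) error
	AfterAppCreate(app *App) error
}

// listenerDatastore wraps a Datastore so every InsertApp fires the
// listeners, instead of each frontend call site doing it by hand.
type listenerDatastore struct {
	Datastore
	l AppListener
}

func (d *listenerDatastore) InsertApp(app *App) (*App, error) {
	if err := d.l.BeforeAppCreate(app); err != nil {
		return nil, err
	}
	out, err := d.Datastore.InsertApp(app)
	if err != nil {
		return nil, err
	}
	if err := d.l.AfterAppCreate(out); err != nil {
		return nil, err
	}
	return out, nil
}

type memDS struct{}

func (memDS) InsertApp(app *App) (*App, error) { return app, nil }

type logListener struct{ events []string }

func (l *logListener) BeforeAppCreate(*App) error {
	l.events = append(l.events, "before")
	return nil
}

func (l *logListener) AfterAppCreate(*App) error {
	l.events = append(l.events, "after")
	return nil
}

func main() {
	ll := &logListener{}
	ds := &listenerDatastore{Datastore: memDS{}, l: ll}
	ds.InsertApp(&App{Name: "myapp"})
	fmt.Println(ll.events) // [before after]
}
```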
2018-02-28 17:04:00 -08:00
CI
10e313e524 fnserver: 0.3.349 release [skip ci] 2018-03-01 00:52:37 +00:00
Cem Ezberci
c149588a5b Remove replicated expvar handler (#805)
expvar package exports Handler which can be directly used instead of copying the expvarHandler function.
2018-02-28 16:43:54 -08:00