* GitHub Actions: Bump actions/download-artifact from 3 to 4
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 3 to 4.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](https://github.com/actions/download-artifact/compare/v3...v4)
---
updated-dependencies:
- dependency-name: actions/download-artifact
dependency-type: direct:production
update-type: version-update:semver-major
...
Signed-off-by: dependabot[bot] <support@github.com>
* Bump `actions/upload-artifact` from v3 to v4
* Group dependency updates to both `actions/{up,down}load-artifact` in a single Dependabot PR
This should allow updating them in tandem and prevent
similar issues from occurring again.
---------
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Armel Soro <asoro@redhat.com>
* Label ServiceBinding tests, so we can run them separately
They require installing additional components in the cluster (OLM, SBO, ...).
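A minimal sketch of what such labeling can look like with Ginkgo v2 (the label value and spec layout below are illustrative assumptions, not necessarily what the odo test suite uses):
```
package integration_test

import . "github.com/onsi/ginkgo/v2"

// Minimal sketch, assuming Ginkgo v2 labels; the label value and spec layout
// are illustrative, not necessarily those of the odo suite.
var _ = Describe("odo add binding command tests", Label("servicebinding"), func() {
	It("creates a ServiceBinding for the component", func() {
		// ... spec body relying on OLM and SBO being installed ...
	})
})
```
Labeled specs can then be selected or excluded at run time, e.g. `ginkgo --label-filter=servicebinding` or `ginkgo --label-filter='!servicebinding'`.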
* Add GH Workflow for most of our tests (including cluster-related tests)
This makes it easy to test multiple versions of Kubernetes if needed.
For easier reporting and visualisation (and also to avoid rebuilding odo many times),
the Podman tests have also been relocated to this same Workflow.
Notes:
I tried to spin up lightweight OpenShift clusters but gave up because of several issues:
- MicroShift: I tried to use the aio container image, but it is no longer maintained and provides a pretty old version of OCP.
Trying to follow the official guidelines did not work either, because a base RHEL OS is mandatory.
- CRC/OpenShift Local with the MicroShift preset: it did not pass the pre-checks because an issue with nested virtualization was detected on the GH Runner.
* Drop unused code in helper_oc and use namespace instead of project
When testing on MicroShift, it seems that the Project API is purposely not implemented there.
* Add '/notifications' endpoint for subscribing to server-sent events
* Generate server and client
* Try implementing the notification service endpoint
* Revert "Try implementing the notification service endpoint"
This does not seem to work because the generated server always responds
with application/json, and it is not possible to respond with a
different content-type.
This reverts commit cf3ce83677649763b8166c4847501c37246dd757.
* Revert "Generate server and client"
This reverts commit b985c007a0561edbe185adc3b9582e12aa3f072b.
* Revert "Add '/notifications' endpoint for subscribing to server-sent events"
This reverts commit c5c903329f13dbe4ec096d83b1c8624fd622bef3.
* Implement 'GET /notifications' SSE endpoint and logic to detect and notify Devfile changes
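A minimal sketch of the server side of such an SSE endpoint in Go (the handler wiring, channel, port, and event name are illustrative assumptions, not the exact odo implementation):
```
package main

import (
	"fmt"
	"net/http"
)

// devfileChanges is a hypothetical channel fed by a filesystem watcher
// whenever the Devfile (or a related resource) changes on disk.
var devfileChanges = make(chan string)

func notificationsHandler(w http.ResponseWriter, r *http.Request) {
	flusher, ok := w.(http.Flusher)
	if !ok {
		http.Error(w, "streaming unsupported", http.StatusInternalServerError)
		return
	}
	// SSE requires this content type, which is why the generated server
	// (hard-wired to application/json) could not be used for this endpoint.
	w.Header().Set("Content-Type", "text/event-stream")
	w.Header().Set("Cache-Control", "no-cache")

	for {
		select {
		case <-r.Context().Done():
			return // client disconnected
		case payload := <-devfileChanges:
			fmt.Fprintf(w, "event: DevfileUpdated\ndata: %s\n\n", payload)
			flusher.Flush()
		}
	}
}

func main() {
	http.HandleFunc("/notifications", notificationsHandler)
	_ = http.ListenAndServe("127.0.0.1:20000", nil)
}
```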
* Leverage EventSource to subscribe to Server Sent Events
Here, this is being used to automatically reload the Devfile in the YAML view
whenever the API server notifies of filesystem changes in the Devfile
(and related resources).
* Add Preference Client to apiserver CLI
This is needed to be able to persist Devfiles from the UI to the filesystem
* Add E2E test case
* fixup! Leverage EventSource to subscribe to Server Sent Events
Co-authored-by: Philippe Martin <phmartin@redhat.com>
* Limit the round-trips by sending the whole Devfile content in the DevfileUpdated event data
Co-authored-by: Philippe Martin <phmartin@redhat.com>
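The idea, sketched below, is that the event payload carries the serialized Devfile itself, so the UI does not have to issue a follow-up GET after every notification (the type and field names are assumptions, not necessarily those used by odo):
```
// Illustrative payload for the DevfileUpdated event.
type DevfileUpdatedEvent struct {
	// Content carries the full Devfile (as YAML) so the YAML view can be
	// refreshed directly from the event, without a follow-up GET request.
	Content string `json:"content"`
}
```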
* [Cypress] Make sure to wait for API responses after visiting the home page
Co-authored-by: Philippe Martin <phmartin@redhat.com>
* Generate static UI
* fixup! [Cypress] Make sure to wait for API responses after visiting the home page
---------
Co-authored-by: Philippe Martin <phmartin@redhat.com>
* Start the API Server from the UI component itself
* Store Cypress screenshots and videos as test artifacts upon test failures
This should make test failures easier to understand and investigate.
* Increase timeout to find element via getByDataCy
The default timeout caused some flakiness at times.
* Install Terminal Report Plugin [1] to log useful information about Cypress runs
It outputs actions, intercepted requests, console messages and errors directly to stdout in a convenient format.
[1] https://github.com/archfz/cypress-terminal-report
* Revert "Increase timeout to find element via getByDataCy"
This reverts commit 410b5c6c3f.
* Intercept network calls when clearing or saving DevState and wait until we get successful responses
In some cases, clicking too quickly led to inconsistent behavior where, for example, the Containers tab would not be up to date yet.
* Disable Angular telemetry when running 'ng serve'
* Serve UI from api server
* Build UI static files + check generation with GHAction
* Update UI static files
* Use specific commit for verify-changed-files action
* Add pkg/apiserver-impl/ui to .gitattributes
* Ignore pkg/apiserver-impl/ui/** for sonar
* POST /devstate/container
* Implement POST /devstate/container
* Generate DELETE /devstate/container/{containerName}
* Implement DELETE /devstate/container/{containerName}
* Serve /devstate/image
* Implement /devstate/image
* Serve /devstate/resource
* Implement /devstate/resource
* Move Components specific code to components.go
* Serve /devstate/*command
* Implement /devstate/*command
* Serve /devstate/metadata
* Implement /devstate/metadata
* Serve /devstate/chart
* Implement /devstate/chart
* Create a DevfileContent schema reference
* Use `DELETE /command/{name}` instead of `DELETE /*Command/{name}`
* Serve /devstate/command/move
* Implement /devstate/command/move
* Serve /devstate/command/{name}/[un]setDefault
* Implement /devstate/command/{name}/[un]setDefault
* serve /devstate/events
* Implement /devstate/events
* Serve /devstate/quantityValid
* Implement /devstate/quantityValid
* Add json tag to API result value
* Set a proxy for the API
* Move calls from wasm to api (first part)
* Implement PUT /devstate/devfile
* Move calls from wasm to api (end)
* Implement GET /devstate/devfile
* Implement DELETE /devstate/devfile
* At startup, get the Devfile from the API, not from localStorage
* Rename service wasmGo -> devstate
* Remove wasm module
* Update to the latest devfile-lifecycle version, which is license-compatible
* Apply suggestions from code review
Co-authored-by: Armel Soro <armel@rm3l.org>
* Remove wasm from ui/{Makefile,devfile.yaml}
* Define DevfileContent into apispec
* Define required fields
* Generate API models from front
* Regenerate API server after spec changes
* Fix examples case
* Fix GitHub Actions E2E tests not running
* Add a Make target for all generated API code
---------
Co-authored-by: Armel Soro <armel@rm3l.org>
* Trigger website PR preview workflow only for changes in the 'docs/website' folder or in the Workflow itself
While this allowed testing 'odo deploy', it
makes more sense to avoid deploying the website needlessly if there are no changes in the website itself.
There are already integration and E2E tests exercising 'odo deploy' on different projects.
* Do not require manual deployment approval if the PR comes from members of the odo team or certain robot accounts we rely on
* Add 'deploy' command in Devfile to support outer-loop case for the website
The goal is to leverage this for creating PR deploy previews
in an automated way.
* Add GitHub Workflow to create Deploy previews for PRs using 'odo deploy'
* Leverage the image-names-as-selector feature
* Add the odo binary location to the system PATH to make it easier to use
* Do not change the Devfile name dynamically
We are using a robot account on quay.io,
which requires specific permissions per repository name,
and the repository name cannot be dynamic in this case.
Since we are scoping everything per namespace, it should be fine.
* Build nightly versions of odo and upload them to IBM Cloud Object Storage
* Document where the nightly builds can be downloaded and installed
* Allow triggering the nightly build workflow manually if needed
* Add a '-nightly' suffix to the commit id included at build time
This will help users running 'odo' know
that they are running a nightly build, e.g.:
```
$ ./odo version
odo v3.11.0 (077397dbd-nightly)
```
* Use an arbitrary cron schedule during the night to avoid peak executions at midnight
Co-authored-by: Philippe Martin <contact@elol.fr>
---------
Co-authored-by: Philippe Martin <contact@elol.fr>
* Set Go version in go.mod
go mod edit -go=1.19
* Fix formatting issues reported by gofmt
* Fix SA1019 check (usage of deprecated "io/ioutil"), reported by golangci-lint
SA1019: "io/ioutil" has been deprecated since Go 1.16:
As of Go 1.16, the same functionality is now provided by package io or package os,
and those implementations should be preferred in new code.
See the specific function documentation for details. (staticcheck)
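The mechanical part of that fix is mostly a one-for-one swap of the deprecated io/ioutil helpers for their io and os equivalents, for example (file names here are illustrative):
```
package main

import (
	"fmt"
	"os"
)

func main() {
	// was: ioutil.ReadFile
	data, err := os.ReadFile("devfile.yaml")
	if err != nil {
		fmt.Println(err)
		return
	}
	// was: ioutil.WriteFile
	if err := os.WriteFile("devfile-copy.yaml", data, 0o644); err != nil {
		fmt.Println(err)
	}
	// Other common replacements: ioutil.TempDir -> os.MkdirTemp,
	// ioutil.ReadAll -> io.ReadAll, ioutil.Discard -> io.Discard.
}
```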
* Use Go 1.19 in our Dockerfiles
* Use Go 1.19 in the rpm-prepare.sh script
* Update the tag for the IBM Cloud CI image
* Make sure to fully delete the component resources after each "delete component" test spec
Otherwise, "odo delete component --running-in $mode" might leave some resources running
(on Podman especially).
For example, the Podman test that runs "delete component --running-in deploy"
(after starting and killing a Dev Session abruptly) might leave those Dev resources running.
This has been alleviated in CI via 9ebf766 by ensuring Ginkgo does not hang indefinitely and
by deleting those leftover resources.
But the issue still persisted locally.
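A hedged sketch of the shape of that per-spec cleanup in a Ginkgo suite (the helper name and component name are assumptions; the real specs may clean up differently):
```
package integration_test

import . "github.com/onsi/ginkgo/v2"

// deleteComponentAndWait is a hypothetical helper standing in for whatever the
// suite actually uses to force-delete the spec's component and wait until its
// resources (on the cluster or on Podman) are really gone.
func deleteComponentAndWait(componentName string) { /* ... */ }

var _ = AfterEach(func() {
	// Run after every spec, so a spec that killed its Dev session abruptly
	// cannot leak running resources into the following specs.
	deleteComponentAndWait("my-component")
})
```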
* Do not error out if there is no pod to stop in the last Podman test job step
* Stop containers after tests
* Use the --output-interceptor-mode=none flag to avoid waiting for containers to be stopped
* Run podman pod inspect after tests
* Dedicated step for listing/stopping containers
* Rename SetProjectName to GetProjectName
Co-authored-by: Anand Singh <ansingh@redhat.com>
Co-authored-by: Parthvi Vala <pvala@redhat.com>
Co-authored-by: Philippe Martin <phmartin@redhat.com>
* Generate specific containers.conf file for each test spec using a dedicated engine namespace
Co-authored-by: Anand Singh <ansingh@redhat.com>
Co-authored-by: Parthvi Vala <pvala@redhat.com>
Co-authored-by: Philippe Martin <phmartin@redhat.com>
* Listen on random ports on Podman when '--random-ports' is used
This reduces the risks of port conflicts when running test specs in parallel.
Co-authored-by: Philippe Martin <phmartin@redhat.com>
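A common way to obtain such a random free port in Go, sketched below, is to let the OS pick an ephemeral port by listening on port 0 and then reusing it (this shows the technique only, not the exact odo code):
```
package main

import (
	"fmt"
	"net"
)

// randomFreePort asks the OS for an ephemeral port and returns it.
// Sketch of the technique only; the actual odo implementation may differ.
func randomFreePort() (int, error) {
	l, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return 0, err
	}
	defer l.Close()
	return l.Addr().(*net.TCPAddr).Port, nil
}

func main() {
	port, err := randomFreePort()
	if err != nil {
		panic(err)
	}
	fmt.Println("using local port", port)
}
```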
* Exclude Gosec G404 (use of math/rand) rule
Co-authored-by: Philippe Martin <phmartin@redhat.com>
* Run Podman specs in parallel
* Output the Pod spec to be played by Podman depending on verbosity level
This will help debug potential issues.
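A sketch of gating that output on verbosity, assuming a klog-style logger (the verbosity level, package, and variable names are illustrative):
```
package podman

import "k8s.io/klog/v2"

// logPodSpec dumps the Kubernetes YAML that will be passed to
// "podman play kube", but only at higher verbosity levels (e.g. -v 4).
func logPodSpec(podSpecYAML string) {
	if klog.V(4).Enabled() {
		klog.V(4).Infof("Pod spec to be played by Podman:\n%s", podSpecYAML)
	}
}
```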
* Use random name in 'using devfile that contains K8s resource to run it on podman' test
* Use random component name in sample java-quarkus project used in 'a hotReload capable project is used with odo dev' test
* Revert "Run Podman specs in parallel"
Parallelization works great on GitHub Actions, but I experienced a lot
of issues when running locally with many (~11) parallel test nodes.
Not sure why exactly, but some containers created by Podman had a lot of
networking issues.
We can look into parallelizing the runs later in a subsequent PR.
This reverts commit 64d5d31248a62f355a32ca245ba399a723fdb22f.
* Allow overriding the number of parallel nodes for Podman integration tests
This way, we can run them in parallel on GitHub
but sequentially (the default) locally.
This will still benefit us by reducing the time it takes to run such tests on GitHub.
Meanwhile, we can look into the issues we have locally with parallelization.
Note that it is still possible to run them locally in parallel via
the PODMAN_EXEC_NODE env var.
Co-authored-by: Anand Singh <ansingh@redhat.com>
Co-authored-by: Parthvi Vala <pvala@redhat.com>
Co-authored-by: Philippe Martin <phmartin@redhat.com>
* Use 'podman generate kube' instead of 'podman kube generate'
* Get environment variables via 'exec -it env' (to be continued)
* Fix podman test
* Export function to make validate tests pass
* review
* Check Podman test results
Co-authored-by: Parthvi Vala <pvala@redhat.com>