Version the documentation (#5502)

<!--
Thank you for opening a PR! Here are some things you need to know before submitting:

1. Please read our developer guideline: https://github.com/redhat-developer/odo/wiki/Developer-Guidelines
2. Label this PR accordingly with the '/kind' line
3. Ensure you have written and run the appropriate tests: https://github.com/redhat-developer/odo/wiki/Writing-and-running-tests
4. Read how we approve and LGTM each PR: https://github.com/redhat-developer/odo/wiki/PR-Review

Documentation:

If you are pushing a change to documentation, please read: https://github.com/redhat-developer/odo/wiki/Contributing-to-Docs
-->

**What type of PR is this:**

<!--
Add one of the following kinds:
/kind bug
/kind feature
/kind cleanup
/kind tests

Feel free to use other [labels](https://github.com/redhat-developer/odo/labels) as needed. However, one of the above labels must be present or the PR will not be reviewed. This instruction is for reviewers as well.
-->
/kind documentation

**What does this PR do / why we need it:**

This versions the documentation in order to display both 2.5.0 and 3.0.0-alpha1.

In this PR we:

* Update the package dependencies, as there was a small bug with
  Docusaurus displaying versioned docs
* Move all documentation for 3.0.0-alpha1 into its own folder
* Make the old 2.5.0 documentation the current (default) docs

**Which issue(s) this PR fixes:**
<!--
Specifying the issue will automatically close it when this PR is merged
-->

Fixes https://github.com/redhat-developer/odo/issues/5377

**PR acceptance criteria:**

- [X] Unit test

- [X] Integration test

- [X] Documentation

**How to test changes / Special notes to the reviewer:**
This commit is contained in:
Charlie Drage
2022-03-01 01:23:18 -05:00
committed by GitHub
parent 25f0f49ac4
commit 55e4f9554e
60 changed files with 11804 additions and 27413 deletions

View File

@@ -53,8 +53,6 @@ odo create nodejs --starter nodejs-starter
This will download the example template corresponding to the chosen component type (in the example above, `nodejs`) in your current directory (or the path provided with the `--context` flag).
If a starter project has its own devfile, then this devfile will be preserved.
## Using an existing devfile
If you want to create a new component from an existing devfile, you can do so by specifying the path to the devfile with the `--devfile` flag.
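For example (the component name and devfile path below are illustrative):
```shell
# Create a component from an existing devfile in the current directory
odo create mycomponent --devfile ./devfile.yaml
```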

View File

@@ -1,6 +1,6 @@
---
title: odo delete
sidebar_position: 4
sidebar_position: 2
---
The `odo delete` command is useful for deleting resources that are managed by odo.
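A minimal sketch of typical usage, run from the component's context directory (the `--all` flag, assumed from odo 2.x, also removes the local configuration):
```shell
# Delete the component from the cluster
odo delete
# Delete the component and its local configuration
odo delete --all
```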

View File

@@ -1,6 +1,6 @@
---
title: odo deploy
sidebar_position: 5
sidebar_position: 4
---
odo can be used to deploy components in a similar manner to how they would be deployed by a CI/CD system,
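A minimal sketch, assuming the component's devfile defines a deploy command:
```shell
# Run the outer-loop deployment described in the devfile
odo deploy
```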

View File

@@ -1,6 +1,6 @@
---
title: odo link
sidebar_position: 7
sidebar_position: 4
---
`odo link` command helps link an odo component to an Operator backed service or another odo component. It does this by using [Service Binding Operator](https://github.com/redhat-developer/service-binding-operator). At the time of writing this, odo makes use of the Service Binding library and not the Operator itself to achieve the desired functionality.
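For example, using the PostgresCluster service created in the quickstart guide:
```shell
# Link the current component to an Operator-backed service, then push the change
odo link PostgresCluster/hippo
odo push
```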

View File

@@ -1,6 +1,6 @@
---
title: odo registry
sidebar_position: 8
sidebar_position: 5
---
odo uses the portable *devfile* format to describe the components. odo can connect to various devfile registries to download devfiles for different languages and frameworks.
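A short sketch of typical usage (the registry name and URL below are illustrative; subcommands assumed from odo 2.x):
```shell
# List the configured devfile registries
odo registry list
# Add another devfile registry
odo registry add MyRegistry https://registry.example.com
```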

View File

@@ -1,6 +1,6 @@
---
title: odo service
sidebar_position: 9
sidebar_position: 6
---
odo can deploy *services* with the help of *operators*.
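For example, using the commands and manifest from the quickstart guide:
```shell
# List services available through installed Operators
odo catalog list services
# Create a service from a manifest file, then push it to the cluster
odo service create --from-file postgrescluster.yaml
odo push
```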

View File

@@ -1,6 +1,6 @@
---
title: odo storage
sidebar_position: 10
sidebar_position: 7
---
odo lets users manage storage volumes attached to the components. A storage volume can be either an ephemeral volume using an `emptyDir` Kubernetes volume, or a [PVC](https://kubernetes.io/docs/concepts/storage/volumes/#persistentvolumeclaim), which is a way for users to "claim" a persistent volume (such as a GCE PersistentDisk or an iSCSI volume) without understanding the details of the particular cloud environment. The persistent storage volume can be used to persist data across restarts and rebuilds of the component.
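A minimal sketch (the storage name, path and size below are illustrative; syntax assumed from odo 2.x):
```shell
# Attach a 1Gi volume mounted at /data to the component, then push
odo storage create mystorage --path /data --size 1Gi
odo push
```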

View File

@@ -1,38 +1,37 @@
---
title: Basics
sidebar_position: 2
sidebar_position: 3
---
# odo concepts
# Concepts of odo
odo abstracts Kubernetes concepts into a developer friendly terminology; in this document, we will take a look at these terminologies.
`odo` abstracts Kubernetes concepts into a developer friendly terminology; in this document, we will take a look at the following terminologies:
#### Application
An application in odo is a classic application developed with a [cloud-native approach](https://www.redhat.com/en/topics/cloud-native-apps) that is used to perform a particular task.
Examples of applications: Online Video Streaming, Hotel Reservation System, Online Shopping.
### Application
An application in `odo` is a classic application developed with a [cloud-native approach](https://www.redhat.com/en/topics/cloud-native-apps) that is used to perform a particular task.
#### Component
In the cloud-native architecture, an application is a collection of small, independent, and loosely coupled components; an odo component is one of these components.
Examples of applications: Online Video Streaming, Hotel Reservation System, Online Shopping.
Examples of components: API Backend, Web Frontend, Payment Backend.
### Component
In the cloud-native architecture, an application is a collection of small, independent, and loosely coupled components; an `odo` component is one of these components.
#### Project
A project helps achieve multi-tenancy: several applications can be run in the same cluster by different teams in different projects.
Examples of components: API Backend, Web Frontend, Payment Backend.
#### Context
A context is the directory on the system that contains the source code, tests, libraries and odo specific config files for a single component.
### Project
A project helps achieve multi-tenancy: several applications can be run in the same cluster by different teams in different projects.
#### URL
A URL exposes a component to be accessed from outside the cluster.
### Context
Context is the directory on the system that contains the source code, tests, libraries and `odo`-specific config files for a single component.
#### Storage
A storage is a persistent storage in the cluster: it persists the data across restarts and rebuilds of a component.
### URL
A URL exposes a component to be accessed from outside the cluster.
#### Service
A service is an external application that a component can connect to or depend on to gain an additional functionality.
Example of services: MySQL, Redis.
### Storage
Storage is the persistent storage in the cluster: it persists the data across restarts and any rebuilds of a component.
### Service
Service is an external application that a component can connect to or depend on to gain additional functionality.
Example of services: PostgreSQL, MySQL, Redis, RabbitMQ.
### Devfile
Devfile is a portable YAML file containing the definition of a component and its related URLs, storages and services. Visit [devfile.io](https://devfile.io/) for more information on devfiles.
#### Devfile
A devfile is a portable YAML file containing the definition of a component and its related URLs, storages and services. See Devfile <!--TODO: Add link to devfile in architecture section when ready--> to learn more about devfiles.

View File

@@ -4,167 +4,51 @@ sidebar_position: 1
---
# Setting up a Kubernetes cluster
## Introduction
This guide is helpful in setting up a development environment intended to be used with `odo`; this setup is not recommended for a production environment.
`odo` can be used with ANY Kubernetes cluster. However, this development environment will ensure complete coverage of all features of `odo`.
*Note that this guide is only helpful in setting up a development environment; this setup is not recommended for a production environment.*
## Prerequisites
* You have a Kubernetes cluster set up (such as [minikube](https://minikube.sigs.k8s.io/docs/start/))
* You have admin privileges to the cluster
* You have a Kubernetes cluster set up; this could, for example, be a [minikube](https://minikube.sigs.k8s.io/docs/start/) cluster.
* You have admin privileges to the cluster, since the Operator installation is only possible with an admin user.
**Important notes:** `odo` will use the __default__ ingress and storage provisioning on your cluster. If they have not been set correctly, see our [troubleshooting guide](/docs/getting-started/cluster-setup/kubernetes#troubleshooting) for more details.
## Enabling Ingress
To access an application externally, you will create _URLs_ using odo, which are implemented on a Kubernetes cluster by Ingress resources; installing an Ingress controller helps in using this feature on a Kubernetes cluster.
## Summary
* An Ingress controller in order to use `odo url create`
* Operator Lifecycle Manager in order to use `odo service create`
* (Optional) Service Binding Operator in order to use `odo link`
## Installing an Ingress controller
Installing an Ingress controller is required to use the `odo url create` feature.
This can be enabled by installing [an Ingress addon as per the Kubernetes documentation](https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/), such as the built-in addon on [minikube](https://minikube.sigs.k8s.io/) or [NGINX Ingress](https://kubernetes.github.io/ingress-nginx/).
**IMPORTANT:** `odo` cannot specify an Ingress controller and will use the *default* Ingress controller.
If you are unable to access your components, check that your [default Ingress controller](https://kubernetes.github.io/ingress-nginx/#i-have-only-one-ingress-controller-in-my-cluster-what-should-i-do) has been set correctly.
### Minikube
To install an Ingress controller on a minikube cluster, enable the **ingress** addon with the following command:
**Minikube:** To install an Ingress controller on a minikube cluster, enable the **ingress** addon with the following command:
```shell
minikube addons enable ingress
```
To learn more about the ingress addon, see [the documentation on the Kubernetes website](https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/).
### NGINX Ingress
**Other Kubernetes Cluster**: To enable the Ingress feature on a Kubernetes cluster _other than minikube_, using the NGINX Ingress controller see [the official NGINX Ingress controller installation documentation](https://kubernetes.github.io/ingress-nginx/deploy/).
To enable the Ingress feature on a Kubernetes cluster _other than minikube_, we recommend using the [NGINX Ingress controller](https://kubernetes.github.io/ingress-nginx/deploy/).
With the default installation method, you will need to set NGINX Ingress as your [default Ingress controller](https://kubernetes.github.io/ingress-nginx/#i-have-only-one-ingress-controller-in-my-cluster-what-should-i-do) so that `odo` can deploy URLs correctly.
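As a rough sketch (the `nginx` class name is an example; use the name of your installed IngressClass), one way to mark a class as the default is:
```shell
kubectl annotate ingressclass nginx ingressclass.kubernetes.io/is-default-class="true" --overwrite
```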
### Other Ingress controllers
For a list of all available Ingress controllers see the [the Ingress controller documentation](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/).
To use a different controller, see [the Ingress controller documentation](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/).
To learn more about enabling this feature on your cluster, see the [Ingress prerequisites](https://kubernetes.io/docs/concepts/services-networking/ingress/#prerequisites) in the official Kubernetes documentation.
## Installing the Operator Lifecycle Manager (OLM)
The Operator Lifecycle Manager (OLM) is a component of the Operator Framework, an open source toolkit to manage Kubernetes native applications, called Operators, in a streamlined and scalable way. [(Source)](https://olm.operatorframework.io/)
Installing the Operator Lifecycle Manager (OLM) is required to use the `odo service create` feature.
[//]: # (Move this section to Architecture > Service Binding or create a new Operators doc)
What are Operators?
>The Operator pattern aims to capture the key aim of a human operator who is managing a service or set of services. Human operators who look after specific applications and services have deep knowledge of how the system ought to behave, how to deploy it, and how to react if there are problems.
>
>People who run workloads on Kubernetes often like to use automation to take care of repeatable tasks. The Operator pattern captures how you can write code to automate a task beyond what Kubernetes itself provides.
> [(Source)](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/#motivation)
The [Operator Lifecycle Manager (OLM)](https://olm.operatorframework.io/) is an open source toolkit to manage Kubernetes native applications, called Operators, in a streamlined and scalable way.
`odo` utilizes Operators in order to create and link services to applications.
The following command will install OLM cluster-wide as well as create two new namespaces: `olm` and `operators`.
[//]: # (Move until here)
To install an Operator, we will first need to install OLM [(Operator Lifecycle Manager)](https://olm.operatorframework.io/) on the cluster.
```shell
curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.20.0/install.sh | bash -s v0.20.0
curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.18.3/install.sh | bash -s v0.18.3
```
Running the script will take some time to install all the necessary resources in the Kubernetes cluster including the `OperatorGroup` resource.
Note: Check the OLM [release page](https://github.com/operator-framework/operator-lifecycle-manager/releases/) for the latest release.
Note: Check the OLM [release page](https://github.com/operator-framework/operator-lifecycle-manager/releases/) to use the latest version.
## Installing the Service Binding Operator
odo uses [Service Binding Operator](https://operatorhub.io/operator/service-binding-operator) to provide the `odo link` feature which helps to connect an odo component to a service or another component.
### Installing an Operator
Installing an Operator allows you to install a service such as Postgres, Redis or DataDog.
To install an operator from the OperatorHub website:
1. Visit the [OperatorHub](https://operatorhub.io) website.
2. Search for an Operator of your choice.
3. Navigate to its detail page.
4. Click on **Install**.
5. Follow the instruction in the installation popup. Please make sure to install the Operator in your desired namespace or cluster-wide, depending on your choice and the Operator capability.
6. [Verify the Operator installation](#verifying-the-operator-installation).
### Verifying the Operator installation
Once the Operator is successfully installed on the cluster, you can use `odo` to verify the Operator installation and see the CRDs associated with it; run the following command:
```shell
odo catalog list services
```
The output may look similar to:
```shell
odo catalog list services
Services available through Operators
NAME CRDs
datadog-operator.v0.6.0 DatadogAgent, DatadogMetric, DatadogMonitor
service-binding-operator.v0.9.1 ServiceBinding, ServiceBinding
```
If you do not see your installed Operator in the list, follow the [troubleshooting guide](#troubleshooting-the-operator-installation) to find the issue and debug it.
### Troubleshooting the Operator installation
There are two ways to confirm that the Operator has been installed properly.
The examples you may see in this guide use [Datadog Operator](https://operatorhub.io/operator/datadog-operator) and [Service Binding Operator](https://operatorhub.io/operator/service-binding-operator).
1. Verify that its pod started and is in “Running” state.
```shell
kubectl get pods -n operators
```
The output may look similar to:
```shell
kubectl get pods -n operators
NAME READY STATUS RESTARTS AGE
datadog-operator-manager-5db67c7f4-hgb59 1/1 Running 0 2m13s
service-binding-operator-c8d7587b8-lxztx 1/1 Running 5 6d23h
```
2. Verify that the ClusterServiceVersion (csv) resource is in Succeeded or Installing phase.
```shell
kubectl get csv -n operators
```
The output may look similar to:
```shell
kubectl get csv -n operators
NAME DISPLAY VERSION REPLACES PHASE
datadog-operator.v0.6.0 Datadog Operator 0.6.0 datadog-operator.v0.5.0 Succeeded
service-binding-operator.v0.9.1 Service Binding Operator 0.9.1 service-binding-operator.v0.9.0 Succeeded
```
If the value in the PHASE column is anything other than _Installing_ or _Succeeded_, take a look at the pods in the `olm` namespace and ensure that the pod whose name starts with `operatorhubio-catalog` is in Running state:
```shell
kubectl get pods -n olm
NAME READY STATUS RESTARTS AGE
operatorhubio-catalog-x24dq 0/1 CrashLoopBackOff 6 9m40s
```
If you see output like the above, where the pod is in CrashLoopBackOff or any state other than Running, delete the pod:
```shell
kubectl delete pods/<operatorhubio-catalog-name> -n olm
```
### Checking to see if an Operator has been installed
For this example, we will check the [PostgreSQL Operator](https://operatorhub.io/operator/postgresql) installation.
Check `kubectl get csv` to see if your Operator exists:
```shell
$ kubectl get csv
NAME DISPLAY VERSION REPLACES PHASE
postgresoperator.v5.0.3 Crunchy Postgres for Kubernetes 5.0.3 postgresoperator.v5.0.2 Succeeded
```
If the `PHASE` is something other than `Succeeded`, you won't see it in `odo catalog list services` output, and you won't be able to create a working Operator backed service out of it either. You will have to wait patiently until `PHASE` says `Succeeded`.
## (Optional) Installing the Service Binding Operator
`odo` uses [Service Binding Operator](https://operatorhub.io/operator/service-binding-operator) to provide the `odo link` feature which helps to connect an odo component to a service or another component.
The Service Binding Operator is _optional_ and is used to provide extra metadata support for `odo` deployments.
Operators can be installed in a specific namespace or cluster-wide.
Operators can be installed in a specific namespace or across the cluster (i.e. in all namespaces).
```shell
kubectl create -f https://operatorhub.io/install/service-binding-operator.yaml
```
@@ -174,36 +58,65 @@ If you want to access this resource from other namespaces as well, add your targ
See [Verifying the Operator installation](#verifying-the-operator-installation) to ensure that the Operator was installed successfully.
## Troubleshooting
## Installing an Operator
To install an operator from the OperatorHub website:
1. Visit the [OperatorHub](https://operatorhub.io) website.
2. Search for an Operator of your choice.
3. Navigate to its detail page.
4. Click on **Install**.
5. Follow the instruction in the installation popup. Please make sure to install the Operator in your desired namespace or cluster-wide, depending on your choice and the Operator capability.
6. [Verify the Operator installation](#verifying-the-operator-installation).
### Confirming your Ingress Controller functionality
## Verifying the Operator installation
Wait for a few seconds for the Operator to install.
`odo` will use the *default* Ingress Controller. By default, when you install an Ingress Controller such as [NGINX Ingress](https://kubernetes.github.io/ingress-nginx/), it will *not* be set as the default.
You must set it as the default Ingress Controller by modifying the annotation on your IngressClass:
```sh
kubectl get IngressClass
kubectl edit IngressClass/YOUR-INGRESS
Once the Operator is successfully installed on the cluster, you can use `odo` to verify the Operator installation and see the CRDs associated with it; run the following command:
```shell
odo catalog list services
```
And add the following annotation:
```yaml
metadata:
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
The output can look similar to:
```shell
$ odo catalog list services
Services available through Operators
NAME CRDs
datadog-operator.v0.6.0 DatadogAgent, DatadogMetric, DatadogMonitor
service-binding-operator.v0.9.1 ServiceBinding, ServiceBinding
```
If you do not see your installed Operator in the list, follow the [troubleshooting guide](#troubleshooting-the-operator-installation) to find the issue and debug it.
### Confirming your Storage Provisioning functionality
## Troubleshooting the Operator installation
There are two ways to confirm that the Operator has been installed properly.
The examples you may see in this guide use [Datadog Operator](https://operatorhub.io/operator/datadog-operator) and [Service Binding Operator](https://operatorhub.io/operator/service-binding-operator).
1. Verify that its pod started and is in “Running” state.
```shell
kubectl get pods -n operators
```
The output can look similar to:
```shell
$ kubectl get pods -n operators
NAME READY STATUS RESTARTS AGE
datadog-operator-manager-5db67c7f4-hgb59 1/1 Running 0 2m13s
service-binding-operator-c8d7587b8-lxztx 1/1 Running 5 6d23h
```
2. Verify that the ClusterServiceVersion (csv) resource is in Succeeded or Installing phase.
```shell
kubectl get csv -n operators
```
The output can look similar to the following:
```shell
$ kubectl get csv -n operators
NAME DISPLAY VERSION REPLACES PHASE
datadog-operator.v0.6.0 Datadog Operator 0.6.0 datadog-operator.v0.5.0 Succeeded
service-binding-operator.v0.9.1 Service Binding Operator 0.9.1 service-binding-operator.v0.9.0 Succeeded
```
`odo` deploys with [Persistent Volume Claims](https://kubernetes.io/docs/concepts/storage/persistent-volumes/). By default, when you install a [StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/) such as [GlusterFS](https://kubernetes.io/docs/concepts/storage/storage-classes/#glusterfs), it will *not* be set as the default.
You must set it as the default storage provisioner by modifying the annotation on your StorageClass:
```sh
kubectl get StorageClass
kubectl edit StorageClass/YOUR-STORAGE-CLASS
```
And add the following annotation:
```yaml
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
```
If the value in the PHASE column is anything other than _Installing_ or _Succeeded_, take a look at the pods in the `olm` namespace and ensure that the pod whose name starts with `operatorhubio-catalog` is in Running state:
```shell
$ kubectl get pods -n olm
NAME READY STATUS RESTARTS AGE
operatorhubio-catalog-x24dq 0/1 CrashLoopBackOff 6 9m40s
```
If you see output like the above, where the pod is in CrashLoopBackOff or any state other than Running, delete the pod:
```shell
kubectl delete pods/<operatorhubio-catalog-name> -n olm
```

View File

@@ -3,58 +3,23 @@ title: OpenShift
sidebar_position: 2
---
# Setting up an OpenShift cluster
## Introduction
This guide is helpful in setting up a development environment intended to be used with `odo`; this setup is not recommended for a production environment.
# Setup an OpenShift cluster
*Note that this guide is only helpful in setting up a development environment; this setup is not recommended for a production environment.*
## Prerequisites
* You have an OpenShift cluster set up (such as [crc](https://crc.dev/crc/#installing-codeready-containers_gsg))
* You have admin privileges to the cluster
* You have an OpenShift cluster set up; this could, for example, be a [crc](https://crc.dev/crc/#installing-codeready-containers_gsg) cluster.
* You have admin privileges to the cluster, since Operator installation is only possible with an admin user.
## Summary
* An Operator in order to use `odo service`
* (Optional) Service Binding Operator in order to use `odo link`
[//]: # (Move this section to Architecture > Service Binding or create a new Operators doc)
**What are Operators?**
>The Operator pattern aims to capture the key aim of a human operator who is managing a service or set of services. Human operators who look after specific applications and services have deep knowledge of how the system ought to behave, how to deploy it, and how to react if there are problems.
>
>People who run workloads on Kubernetes often like to use automation to take care of repeatable tasks. The Operator pattern captures how you can write code to automate a task beyond what Kubernetes itself provides.
> [(Source)](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/#motivation)
[//]: # (Move until here)
## Installing an Operator
Installing an Operator allows you to install a service such as PostgreSQL, Redis or DataDog.
To install an Operator from the OpenShift web console:
1. Log in to the OpenShift web console as an admin, and navigate to Operators > OperatorHub.
2. Make sure that the Project is set to All Projects.
3. Search for an Operator of your choice in the search box under **All Items**.
4. Click on the Operator; this should open a side pane.
5. Click on the **Install** button on the side pane; this should open an **Install Operator** page.
6. Set the **Installation mode**, **Installed Namespace** and **Approval Strategy** as per your requirement.
7. Click on the **Install** button.
8. Wait until the Operator is installed.
9. Once the Operator is installed, you should see _**Installed operator - ready for use**_, and a **View Operator** button appears on the page.
10. Click on the **View Operator** button; this should take you to Operators > Installed Operators > Operator details page, and you should be able to see details of your Operator.
### Verifying the Operator installation
Once the Operator is successfully installed on the cluster, you can use `odo` to verify the Operator installation and see the CRDs associated with it; run the following command:
```shell
odo catalog list services
```
The output may look similar to:
```shell
odo catalog list services
Services available through Operators
NAME CRDs
datadog-operator.v0.6.0 DatadogAgent, DatadogMetric, DatadogMonitor
service-binding-operator.v0.9.1 ServiceBinding, ServiceBinding
```
## (Optional) Installing the Service Binding Operator
`odo` uses [Service Binding Operator](https://operatorhub.io/operator/service-binding-operator) to provide the `odo link` feature which helps to connect an odo component to a service or another component.
The Service Binding Operator is _optional_ and is used to provide extra metadata support for `odo` deployments.
## Installing the Service Binding Operator
odo uses [Service Binding Operator](https://operatorhub.io/operator/service-binding-operator) to provide the `odo link` feature which helps connect an odo component to a service or another component.
To install the Service Binding Operator from the OpenShift web console:
1. Log in to the OpenShift web console as an admin, and navigate to Operators > OperatorHub.
@@ -68,3 +33,29 @@ To install the Service Binding Operator from the OpenShift web console:
9. Once the Operator is installed, you should see **_Installed operator - ready for use_**, and a **View Operator** button appears on the page.
10. Click on the **View Operator** button; this should take you to Operators > Installed Operators > Operator details page, and you should be able to see details of your Operator.
## Installing an Operator
To install an Operator from the OpenShift web console:
1. Log in to the OpenShift web console as an admin, and navigate to Operators > OperatorHub.
2. Make sure that the Project is set to All Projects.
3. Search for an Operator of your choice in the search box under **All Items**.
4. Click on the Operator; this should open a side pane.
5. Click on the **Install** button on the side pane; this should open an **Install Operator** page.
6. Set the **Installation mode**, **Installed Namespace** and **Approval Strategy** as per your requirement.
7. Click on the **Install** button.
8. Wait until the Operator is installed.
9. Once the Operator is installed, you should see _**Installed operator - ready for use**_, and a **View Operator** button appears on the page.
10. Click on the **View Operator** button; this should take you to Operators > Installed Operators > Operator details page, and you should be able to see details of your Operator.
## Verifying the Operator installation
Once the Operator is successfully installed on the cluster, you can also use `odo` to verify the Operator installation and see the CRDs associated with it; run the following command:
```shell
odo catalog list services
```
The output can look similar to:
```shell
$ odo catalog list services
Services available through Operators
NAME CRDs
datadog-operator.v0.6.0 DatadogAgent, DatadogMetric, DatadogMonitor
service-binding-operator.v0.9.1 ServiceBinding, ServiceBinding
```

View File

@@ -59,9 +59,10 @@ Example:
$ odo preference view
PARAMETER CURRENT_VALUE
UpdateNotification
NamePrefix
Timeout
BuildTimeout
PushTimeout
RegistryCacheTime
Ephemeral
ConsentTelemetry
```
@@ -93,12 +94,12 @@ Unsetting a preference key sets it to an empty value in the preference file. odo
### Preference Key Table
| Preference | Description | Default |
|--------------------|--------------------------------------------------------------------------------|------------------------|
| UpdateNotification | Control whether a notification to update odo is shown | True |
| NamePrefix | Set a default name prefix for an odo resource (component, storage, etc) | Current directory name |
| Timeout | Timeout for Kubernetes server connection check | 1 second |
| PushTimeout | Timeout for waiting for a component to start | 240 seconds |
| RegistryCacheTime | For how long (in minutes) odo will cache information from the Devfile registry | 4 Minutes |
| Ephemeral          | Control whether odo should create an emptyDir volume to store source code       | True                   |
| ConsentTelemetry | Control whether odo can collect telemetry for the user's odo usage | False |
| Preference | Description | Default |
| --------------------- | ------------------------------------------------------------------------- | ------------------------- |
| UpdateNotification | Control whether a notification to update odo is shown | True |
| NamePrefix | Set a default name prefix for an odo resource (component, storage, etc) | Current directory name |
| Timeout | Timeout for OpenShift server connection check | 1 second |
| BuildTimeout | Timeout for waiting for a build of the git component to complete | 300 seconds |
| PushTimeout | Timeout for waiting for a component to start | 240 seconds |
| Ephemeral             | Control whether odo should create an emptyDir volume to store source code  | True                      |
| ConsentTelemetry | Control whether odo can collect telemetry for the user's odo usage | False |
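For example, to change a preference key and verify it (key names come from the table above):
```shell
odo preference set ConsentTelemetry false
odo preference view
odo preference unset ConsentTelemetry
```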

View File

@@ -1,32 +1,39 @@
---
title: Features
sidebar_position: 1
sidebar_position: 2
---
# Features of odo
# Features provided by odo
By using `odo`, application developers can develop, test, debug, and deploy microservices based applications on Kubernetes without having a deep understanding of the platform.
By using odo, application developers can develop, test, debug, and deploy microservices based applications on Kubernetes without having a deep understanding of the platform.
`odo` follows a *create and push* workflow. As a user, when you *create*, the information (or manifest) is stored in a configuration file. When you *push*, it gets created on the Kubernetes cluster. All of this gets stored in the Kubernetes API for seamless accessibility and function.
odo follows "create and push" workflow for almost everything. It means, as a user, when you "create", something the information (or manifest) is stored in a configuration file, and then upon doing a "push" it gets created on the Kubernetes cluster. You can take an existing git repository and create an odo component from it, which can be pushed to a Kubernetes cluster.
`odo` uses *deploy and link* commands to link components and services together. `odo` achieves this by creating and deploying services based on [Kubernetes Operators](https://github.com/operator-framework/) in the cluster. Services can be created using any of the Operators available on [OperatorHub.io](https://operatorhub.io). Upon linking this service, `odo` injects the service configuration into the component. Your application can then use this configuration to communicate with the Operator backed service.
odo helps "deploy and link" multiple components and services with each other. Using odo, developers can create and deploy services based on [Kubernetes Operators](https://github.com/operator-framework/) in their development cluster. These services can be created using any of the Operators available on [OperatorHub.io](https://operatorhub.io). Next, upon linking this service, odo injects the service configuration into the microservice created using odo. Your application can use this configuration to communicate with the Operator backed service.
### What can `odo` do?
### What can odo do?
Below is a summary of what `odo` can do with your Kubernetes cluster:
odo uses container images to run the microservices in the cluster.
* Create a new manifest, or use an existing one, to deploy applications on a Kubernetes cluster
* Provide commands to create and update the manifest without diving into Kubernetes configuration files
* Securely expose the application running on the Kubernetes cluster so it can be accessed from the developer's machine
* Add and remove additional storage for the application on the Kubernetes cluster
* Create [Operator](https://github.com/operator-framework/) backed services and link with them
* Create a link between multiple microservices deployed as `odo` components
* Debug remote applications deployed using `odo` from the IDE
* Run tests on the applications deployed on Kubernetes
Full details of what each odo command is capable of doing can be found in the "Command Reference" sections.
Below is a summary of odo's most important capabilities:
* Create a manifest to deploy applications on Kubernetes cluster; odo creates the manifest for existing projects as well as new ones.
* No need to interact with YAML configurations; odo provides commands to create and update the manifest.
* Securely expose the application running on Kubernetes cluster to access it from developer's machine.
* Add and remove additional storage to the application on Kubernetes cluster.
* Create [Operator](https://github.com/operator-framework/) backed services and link with them.
* Create a link between multiple microservices deployed as odo components.
* Debug remote applications deployed using odo from the IDE.
* Run tests on the applications deployed on Kubernetes.
Take a look at the "Using odo" documentation for in-depth guides on doing advanced commands with `odo`.
Take a look at "Using odo" section for guides on doing various things using odo.
### What features to expect in odo?
For a quick high level summary of the features we are planning to add, take a look at odo's [milestones on GitHub](https://github.com/redhat-developer/odo/milestones).
We are working on some exciting features like:
* Linking to services created using the Helm package manager.
* Create `odo deploy` command to transition from inner loop to outer loop.
* Support for Knative eventing.
For a quick high level summary of the features we are planning to add, take a look at odo's [milestones on GitHub](https://github.com/redhat-developer/odo/milestones).

View File

@@ -1,256 +1,53 @@
---
title: Installation
sidebar_position: 3
sidebar_position: 4
---
`odo` can be used as either a [CLI tool](/docs/getting-started/installation#cli-binary-installation) or an [IDE plugin](/docs/getting-started/installation#ide-installation) on Mac, Windows or Linux.
odo can be used as a CLI tool and as an IDE plugin; it can be run on Linux, Windows and Mac systems.
## CLI installation
## CLI Binary installation
odo supports the amd64 architecture for Linux, Mac and Windows.
Additionally, it supports the arm64, s390x, and ppc64le architectures on Linux.
Each release is *signed*, *checksummed*, *verified*, and then pushed to our [binary mirror](https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/).
See the [release page](https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/) for more information.
The changes in each release can be viewed either on [GitHub](https://github.com/redhat-developer/odo/releases) or on the [blog](/blog).
### Linux
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
<Tabs
defaultValue="amd64"
values={[
{label: 'Intel / AMD 64', value: 'amd64'},
{label: 'ARM 64', value: 'arm64'},
{label: 'PowerPC', value: 'ppc64le'},
{label: 'IBM Z', value: 's390x'},
]}>
<TabItem value="amd64">
Installing `odo` on `amd64` architecture:
1. Download the latest release from the mirror:
### Installing odo on Linux/Mac
```shell
curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-linux-amd64 -o odo
OS="$(uname | tr '[:upper:]' '[:lower:]')"
ARCH="$(uname -m | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/')"
curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-$OS-$ARCH -o odo
sudo install odo /usr/local/bin/
```
2. (Optional) Verify the downloaded binary with the SHA-256 sum:
```shell
curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-linux-amd64.sha256 -o odo.sha256
echo "$(<odo.sha256) odo" | shasum -a 256 --check
```
### Installing odo on Windows
1. Download the [odo-windows-amd64.exe](https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-windows-amd64.exe) file.
2. Rename the downloaded file to odo.exe and move it to a folder of choice, for example `C:\odo`.
3. Add the location of odo.exe to `%PATH%` variable (refer to the steps below).
3. Install odo
```shell
sudo install -o root -g root -m 0755 odo /usr/local/bin/odo
```
#### Setting the PATH variable in Windows 10
1. Click **Search** and type `env` or `environment`.
2. Select **Edit environment variables for your account**.
3. Select **Path** from the **Variable** section and click **Edit**.
4. Click **New**, add the location where you copied the odo binary (e.g. `C:\odo` in [Step 2 of Installation](#installing-odo-on-windows)) into the field, or click **Browse** and select the directory, and click **OK**.
4. (Optional) If you do not have root access, you can install `odo` to the local directory and add it to your `$PATH`:
#### Setting the PATH variable in Windows 7/8
1. Click **Start** and in the Search box types `Advanced System Settings`.
2. Select **Advanced systems settings** and click the **Environment Variables** button at the bottom.
3. Select the **Path** variable from the **System variables** section and click **Edit**.
4. Scroll to the end of the **Variable value** and add `;` followed by the location where you copied the odo binary (e.g. `C:\odo` in [Step 2 of Installation](#installing-odo-on-windows)) and click **OK**.
5. Click **OK** to close the **Environment Variables** dialog.
6. Click **OK** to close the **System Properties** dialog.
```shell
mkdir -p $HOME/bin
cp ./odo $HOME/bin/odo
export PATH=$PATH:$HOME/bin
# (Optional) Add the $HOME/bin to your shell initialization file
echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc
```
</TabItem>
## Installing odo in Visual Studio Code (VSCode)
The [OpenShift VSCode extension](https://marketplace.visualstudio.com/items?itemName=redhat.vscode-openshift-connector) uses both the odo and oc binaries to interact with a Kubernetes or OpenShift cluster.
1. Open VS Code.
2. Launch VS Code Quick Open (Ctrl+P)
3. Paste the following command:
```shell
ext install redhat.vscode-openshift-connector
```
<TabItem value="arm64">
Installing `odo` on `arm64` architecture:
1. Download the latest release from the mirror:
```shell
curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-linux-arm64 -o odo
```
2. (Optional) Verify the downloaded binary with the SHA-256 sum:
```shell
curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-linux-arm64.sha256 -o odo.sha256
echo "$(<odo.sha256) odo" | shasum -a 256 --check
```
3. Install odo
```shell
sudo install -o root -g root -m 0755 odo /usr/local/bin/odo
```
4. (Optional) If you do not have root access, you can install `odo` to the local directory and add it to your `$PATH`:
```shell
mkdir -p $HOME/bin
cp ./odo $HOME/bin/odo
export PATH=$PATH:$HOME/bin
# (Optional) Add the $HOME/bin to your shell initialization file
echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc
```
</TabItem>
<TabItem value="ppc64le">
Installing `odo` on `ppc64le` architecture:
1. Download the latest release from the mirror:
```shell
curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-linux-ppc64le -o odo
```
2. (Optional) Verify the downloaded binary with the SHA-256 sum:
```shell
curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-linux-ppc64le.sha256 -o odo.sha256
echo "$(<odo.sha256) odo" | shasum -a 256 --check
```
3. Install odo
```shell
sudo install -o root -g root -m 0755 odo /usr/local/bin/odo
```
4. (Optional) If you do not have root access, you can install `odo` to the local directory and add it to your `$PATH`:
```shell
mkdir -p $HOME/bin
cp ./odo $HOME/bin/odo
export PATH=$PATH:$HOME/bin
# (Optional) Add the $HOME/bin to your shell initialization file
echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc
```
</TabItem>
<TabItem value="s390x">
Installing `odo` on `s390x` architecture:
1. Download the latest release from the mirror:
```shell
curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-linux-s390x -o odo
```
2. (Optional) Verify the downloaded binary with the SHA-256 sum:
```shell
curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-linux-s390x.sha256 -o odo.sha256
echo "$(<odo.sha256) odo" | shasum -a 256 --check
```
3. Install odo
```shell
sudo install -o root -g root -m 0755 odo /usr/local/bin/odo
```
4. (Optional) If you do not have root access, you can install `odo` to the local directory and add it to your `$PATH`:
```shell
mkdir -p $HOME/bin
cp ./odo $HOME/bin/odo
export PATH=$PATH:$HOME/bin
# (Optional) Add the $HOME/bin to your shell initialization file
echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc
```
</TabItem>
</Tabs>
---
### MacOS
<Tabs
defaultValue="intel"
values={[
{label: 'Intel', value: 'intel'},
{label: 'Apple Silicon', value: 'arm'},
]}>
<TabItem value="intel">
Installing `odo` on `amd64` architecture:
1. Download the latest release from the mirror:
```shell
curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-darwin-amd64 -o odo
```
2. (Optional) Verify the downloaded binary with the SHA-256 sum:
```shell
curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-darwin-amd64.sha256 -o odo.sha256
echo "$(<odo.sha256) odo" | shasum -a 256 --check
```
3. Install odo
```shell
chmod +x ./odo
sudo mv ./odo /usr/local/bin/odo
```
4. (Optional) If you do not have root access, you can install `odo` to the local directory and add it to your `$PATH`:
```shell
mkdir -p $HOME/bin
cp ./odo $HOME/bin/odo
export PATH=$PATH:$HOME/bin
# (Optional) Add the $HOME/bin to your shell initialization file
echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc
```
</TabItem>
<TabItem value="arm">
Installing `odo` on `arm64` architecture:
1. Download the latest release from the mirror:
```shell
curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-darwin-arm64 -o odo
```
2. (Optional) Verify the downloaded binary with the SHA-256 sum:
```shell
curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-darwin-arm64.sha256 -o odo.sha256
echo "$(<odo.sha256) odo" | shasum -a 256 --check
```
3. Install odo
```shell
chmod +x ./odo
sudo mv ./odo /usr/local/bin/odo
```
4. (Optional) If you do not have root access, you can install `odo` to the local directory and add it to your `$PATH`:
```shell
mkdir -p $HOME/bin
cp ./odo $HOME/bin/odo
export PATH=$PATH:$HOME/bin
# (Optional) Add the $HOME/bin to your shell initialization file
echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc
```
</TabItem>
</Tabs>
---
### Windows
1. Open a PowerShell terminal
2. Download the latest release from the mirror:
```shell
curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-windows-amd64.exe -o odo.exe
```
3. (Optional) Verify the downloaded binary with the SHA-256 sum:
```shell
curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-windows-amd64.exe.sha256 -o odo.exe.sha256
# Visually compare the output of both files
Get-FileHash odo.exe
type odo.exe.sha256
```
4. Add the binary to your `PATH`
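For example, in PowerShell (the `C:\odo` folder is illustrative; use wherever you placed `odo.exe`):
```shell
# Append the odo folder to the user PATH and to the current session
[Environment]::SetEnvironmentVariable("Path", "$env:Path;C:\odo", "User")
$env:Path += ";C:\odo"
```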
### Installing from source code
## Installing from source
1. Clone the repository and cd into it.
```shell
git clone https://github.com/redhat-developer/odo.git
@@ -276,14 +73,3 @@ type odo.exe.sha256
```shell
odo version
```
## IDE Installation
### Installing `odo` in Visual Studio Code (VSCode)
The [OpenShift VSCode extension](https://marketplace.visualstudio.com/items?itemName=redhat.vscode-openshift-connector) uses both the `odo` and `oc` binaries to interact with a Kubernetes or OpenShift cluster.
1. Open VS Code.
2. Launch VS Code Quick Open (Ctrl+P)
3. Paste the following command:
```shell
ext install redhat.vscode-openshift-connector
```

View File

@@ -1,226 +1,163 @@
---
title: Quickstart Guide
sidebar_position: 5
title: Quickstart
sidebar_position: 3
---
# Quickstart Guide
In this guide, we will be using odo to set up a todo application based on the Java Spring Boot framework for the backend/APIs, ReactJS for the frontend, and a PostgreSQL database to store the todo items.
In this guide, we will be using odo to create a to-do list application, with the following:
* ReactJS for the frontend
* Java Spring Boot for the backend
* PostgreSQL to store all persistent data
We will be performing the following tasks using odo in this guide:
1. Create a project
2. Create an odo component for both the frontend and backend applications
3. Create an Operator backed service for PostgreSQL database
4. Link the backend component with the PostgreSQL service
5. Link the frontend component with the backend component
At the end of the guide, you will be able to list, add and delete to-do items from the web browser.
At the end of the guide, you will be able to list, add and delete todo items from the web browser.
## Prerequisites
* Have the odo binary [installed](./installation.md).
* A [Kubernetes cluster](/docs/getting-started/cluster-setup/kubernetes) set up with an [ingress controller](/docs/getting-started/cluster-setup/kubernetes#installing-an-ingress-controller), [operator lifecycle manager](/docs/getting-started/cluster-setup/kubernetes#installing-the-operator-lifecycle-manager-olm) and (optional) [service binding operator](/docs/getting-started/cluster-setup/kubernetes#installing-the-service-binding-operator).
* Or an [OpenShift cluster](/docs/getting-started/cluster-setup/openshift) set up with the (optional) [service binding operator](/docs/getting-started/cluster-setup/openshift#installing-the-service-binding-operator)
## Clone the quickstart guide
Clone the [quickstart](https://github.com/odo-devfiles/odo-quickstart) repo from GitHub:
```shell
git clone https://github.com/odo-devfiles/odo-quickstart
cd odo-quickstart
```
* A [development Kubernetes](./cluster-setup/kubernetes.md) cluster with [Operator Lifecycle Manager](./cluster-setup/kubernetes#installing-the-operator-lifecycle-manager-olm) set up on it.
* This guide is written for minikube users, hence you will notice the usage of the `minikube ip` command to get the IP address of the Kubernetes cluster.
* If you are using a Kubernetes cluster other than minikube, you will need to check with the cluster administrator for the cluster IP to be used with the `--host` flag.
* If you are using [Code Ready Containers (CRC)](https://github.com/code-ready/crc) or another form of OpenShift cluster, you can skip the part of `odo url create` because odo automatically creates URL for the component using [OpenShift Routes](https://docs.openshift.com/container-platform/latest/networking/routes/route-configuration.html).
* Install the [Crunchy Postgres Operator](https://operatorhub.io/operator/postgresql) on the cluster. Assuming you have admin privileges on the development Kubernetes cluster, you can install it using the command below:
```shell
kubectl create -f https://operatorhub.io/install/postgresql.yaml
```
* Have the odo binary [installed](./installation.md) on your system.
## Create a project
We will create a project named `quickstart` on the cluster to keep all quickstart-related activities separate from the rest of the cluster:
We will create a project named `quickstart` on the cluster to keep quickstart related activities separate from the rest of the cluster:
```shell
odo project create quickstart
```
## Create the frontend Node.JS component
## Clone the code
Our frontend component is a React application that communicates with the backend component.
We will use the catalog command to list all available components and find `nodejs`:
Clone [this git repository](https://github.com/dharmit/odo-quickstart/) and `cd` into it:
```shell
odo catalog list components
git clone https://github.com/dharmit/odo-quickstart
cd odo-quickstart
```
Example output of `odo catalog list components`:
## Create the backend component
First we create a component for the backend application, which is a Java Spring Boot based REST API. It will help us list, insert and delete todos from the database. Execute the steps below:
```shell
Odo Devfile Components:
NAME DESCRIPTION REGISTRY
nodejs Stack with Node.js 14 DefaultDevfileRegistry
nodejs-angular Stack with Angular 12 DefaultDevfileRegistry
nodejs-nextjs Stack with Next.js 11 DefaultDevfileRegistry
nodejs-nuxtjs Stack with Nuxt.js 2 DefaultDevfileRegistry
...
```
Pick `nodejs` to create the frontend component:
```shell
cd frontend
odo create nodejs frontend
```
Create a URL in order to access the component in the browser:
```shell
odo url create --port 3000 --host <CLUSTER-HOSTNAME>
```
**Minikube users:** Use `minikube ip` to find the cluster IP and then use `<MINIKUBE-IP>.nip.io` for `--host`.
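For example, on minikube the host can be built from the cluster IP (the same pattern is used later in this guide):
```shell
odo url create --port 3000 --host `minikube ip`.nip.io
```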
Push the component to the cluster:
```shell
odo push
```
The URL will be listed in the `odo push` output, or can be found in `odo url list`.
Browse the site and try it out! Note that you will not be able to add, remove or list the to-dos yet, as we have not linked the frontend and backend components.
## Create the backend Java component
The backend application is a Java Spring Boot based REST API which will list, insert and delete to-dos from the database.
Find `java-springboot` in the catalog:
```shell
odo catalog list components
```
Example output of `odo catalog list components`:
```shell
Odo Devfile Components:
NAME DESCRIPTION REGISTRY
java-quarkus Quarkus with Java DefaultDevfileRegistry
java-springboot Spring Boot® using Java DefaultDevfileRegistry
java-vertx Upstream Vert.x using Java DefaultDevfileRegistry
...
```
Let's create the component below:
```shell
cd ../backend
cd backend
odo create java-springboot backend
odo url create --port 8080 --host <CLUSTER-HOSTNAME>
odo url create --port 8080 --host `minikube ip`.nip.io
odo push
```
Note: you will not be able to access `http://<YOUR-URL>/api/v1/todos` until we link the backend component to the database service.
The `minikube ip` command helps get the IP address of the minikube instance. It is required to create a URL accessible from the web browser of the host system on which minikube is running.
## Create the Postgres service
## Create the Postgres database
Use `odo catalog list services` to list all available operators.
By default, [Operator Lifecycle Manager (OLM)](/docs/getting-started/cluster-setup/kubernetes#installing-the-operator-lifecycle-manager-olm) includes no Operators; they must be installed via [Operator Hub](https://operatorhub.io/).
Install the [Postgres Operator](https://operatorhub.io/operator/postgresql) on the cluster:
```shell
kubectl create -f https://operatorhub.io/install/postgresql.yaml
```
Find `postgresql` in the catalog:
In the [prerequisites](#prerequisites) section, we installed the Postgres Operator. Before creating a service using it, first ensure that the Operator is installed correctly. You should see the Postgres Operator as in the output below. Note that you might see more Operators in the output if there are other Operators installed on your cluster:
```shell
odo catalog list services
```
Example output of `odo catalog list services`:
```shell
$ odo catalog list services
Services available through Operators
NAME CRDs
postgresoperator.v5.0.3 PostgresCluster
```
If you don't see the PostgreSQL Operator listed yet, it may still be installing. Check out our [Operator troubleshooting guide](/docs/getting-started/cluster-setup/kubernetes#checking-to-see-if-an-operator-has-been-installed) for more information.
If you don't see the Postgres Operator here, it might still be installing. Take a look at the `PHASE` column in the output below:
```shell
kubectl get csv
```
```shell
$ kubectl get csv
NAME DISPLAY VERSION REPLACES PHASE
postgresoperator.v5.0.3 Crunchy Postgres for Kubernetes 5.0.3 postgresoperator.v5.0.2 Succeeded
```
If the `PHASE` is something other than `Succeeded`, you won't see it in `odo catalog list services` output, and you won't be able to create a working Operator backed service out of it either.
Now create the service using:
[//]: # (This needs to fixed in the future and a parameter-based command added rather than a .yaml file)
[//]: # (Right now this is blocked on: https://github.com/redhat-developer/odo/issues/5215)
Create the service using the provided `postgrescluster.yaml` file from [CrunchyData's Postgres guide](https://access.crunchydata.com/documentation/postgres-operator/5.0.0/tutorial/create-cluster/):
```sh
odo service create --from-file ../postgrescluster.yaml
```
Example output:
```sh
$ odo service create --from-file ../postgrescluster.yaml
Successfully added service to the configuration; do 'odo push' to create service on the cluster
```
The service from `postgrescluster.yaml` should now be added to your `devfile.yaml`; do a push to create the database on the cluster:
The `postgrescluster.yaml` file in the repository contains configuration that should help bring up a Postgres database. Do a push to create the database on the cluster:
```shell
odo push
```
## Link the backend component and the service
## Link the backend component and the database
Now we will link the backend component (Java API) to the service (Postgres).
First, see if the service has been deployed:
Next, we need to link the backend component with the database. Let's get the information about the database service first:
```shell
odo service list
```
Example output:
```shell
$ odo service list
NAME MANAGED BY ODO STATE AGE
PostgresCluster/hippo Yes (backend) Pushed 3m42s
```
Link the backend component with the above service:
Now, let's link the backend component with the above service using:
```shell
odo link PostgresCluster/hippo
odo push
```
Now, get the URL (`odo url list`) for the backend component, append `api/v1/todos` to it and open it in your browser:
```shell
odo url list
```
Push the changes and `odo` will link the service to the component:
Example output:
```shell
$ odo url list
Found the following URLs for component backend
NAME STATE URL PORT SECURE KIND
8080-tcp Pushed http://8080-tcp.192.168.39.117.nip.io 8080 false ingress
```
In this case, the URL to load in browser would be `http://8080-tcp.192.168.39.117.nip.io/api/v1/todos`. Note that the URL would be different in your case depending on what the minikube VM's IP is. When you load the URL in the browser, you should see an empty list:
```shell
[]
```
## Create the frontend component
Our frontend component is a React application that communicates with the backend component. Create the frontend component:
```sh
cd ../frontend
odo create nodejs frontend
odo url create --port 3000 --host `minikube ip`.nip.io
odo push
```
Now your service is linked to the backend component!
Open the URL for the component in the browser, but note that you won't be able to add, remove or list the todos yet because we haven't linked the frontend and the backend components:
```shell
odo url list
```
## Link the frontend and backend components
To link the frontend component to backend:
For our last step, we will now link the backend Java component (which also uses the Postgres service) and the frontend Node.JS component.
This will allow both to communicate with each other in order to store persistent data.
Change to the `frontend` component directory and link it to the backend:
```shell
cd ../frontend
odo link backend
```
Push the changes:
```shell
odo push
```
We're done! Now it's time to test your new multi-component and service application.
## Testing your application
### Frontend Node.JS component
Find out what URL is being used by the frontend:
```shell
odo url list
Found the following URLs for component frontend
NAME STATE URL PORT SECURE KIND
http-3000 Pushed http://<URL-OUTPUT> 3000 false ingress
```
Visit the link and type in some to-dos!
### Backend Java component
Let's see if each to-do is being stored in the backend api and database.
Find out what URL is being used by the backend:
```shell
odo url list
Found the following URLs for component backend
NAME STATE URL PORT SECURE KIND
8080-tcp Pushed http://<URL-OUTPUT> 8080 false ingress
```
When you `curl` or view the URL in your browser, you'll now see the list of your to-dos:
```shell
curl http://<URL-OUTPUT>/api/v1/todos
[{"id":1,"description":"hello"},{"id":2,"description":"world"}]
```
## Further reading
Want to learn what else `odo` can do? Check out the [Tutorials](/docs/intro) on the sidebar.
Now reload the URL of the frontend component and try adding and removing some todo items. The list of items appears by default on the same page, just below the input box that reads `Add a new task`.

View File

@@ -5,47 +5,38 @@ title: Introduction
### What is odo?
`odo` is a fast, iterative and straightforward CLI tool for developers who write, build, and deploy applications on Kubernetes.
odo is a fast, iterative and straightforward CLI tool for developers who write, build, and deploy applications on Kubernetes.
We abstract the complex concepts of Kubernetes so you can focus on one thing: `code`.
odo abstracts the complex Kubernetes terminology so that an application developer can focus on writing code in their favourite framework without having to learn Kubernetes.
Choose your favourite framework and `odo` will deploy it *fast* and *often* to your container orchestrator cluster.
`odo` is focused on [inner loop](./intro#what-is-inner-loop-and-outer-loop) development as well as tooling that would help users transition to the [outer loop](./intro#what-is-inner-loop-and-outer-loop).
odo is focused on [inner loop](./intro#what-is-inner-loop-and-outer-loop) development with some tooling that would help users transition to the [outer loop](./intro#what-is-inner-loop-and-outer-loop).
Brendan Burns, one of the co-founders of Kubernetes, said in the [book Kubernetes Patterns](https://www.redhat.com/cms/managed-files/cm-oreilly-kubernetes-patterns-ebook-f19824-201910-en.pdf):
> It (Kubernetes) is the foundation on which applications will be built, and it provides a large library of APIs and tools for building these applications, but it does little to provide the application or container developer with any hints or guidance for how these various pieces can be combined into a complete, reliable system that satisfies their business needs and goals.
> It (Kubernetes) is the foundation on which applications will be built, and it provides a large library of APIs and tools for building these applications, but it does little to provide the application architect or developer with any hints or guidance for how these various pieces can be combined into a complete, reliable system that satisfies their business needs and goals.
`odo` satisfies that need by making Kubernetes development *super easy* for application developers and cloud engineers.
odo makes Kubernetes easy for application architects and developers.
### What is "inner loop" and "outer loop"?
The **inner loop** consists of local coding, building, running, and testing the application -- all activities that you, as a developer, can control.
The **outer loop** consists of the larger team processes that your code flows through on its way to the cluster: code reviews, integration tests, security and compliance, and so on.
The inner loop could happen mostly on your laptop. The outer loop happens on shared servers and runs in containers, and is often automated with continuous integration/continuous delivery (CI/CD) pipelines.
Usually, a code commit to source control is the transition point between the inner and outer loops.
The inner loop consists of local coding, building, running, and testing the application -- all activities that you, as a developer, can control. The outer loop consists of the larger team processes that your code flows through on its way to the cluster: code reviews, integration tests, security and compliance, and so on. The inner loop could happen mostly on your laptop. The outer loop happens on shared servers and runs in containers, and is often automated with continuous integration/continuous delivery (CI/CD) pipelines. Usually, a code commit to source control is the transition point between the inner and outer loops.
*([Source](https://developers.redhat.com/blog/2020/06/16/enterprise-kubernetes-development-with-odo-the-cli-tool-for-developers#improving_the_developer_workflow))*
### Why should I use `odo`?
### Who should use odo?
You should use `odo` if:
* You love frameworks such as Node.js, Spring Boot or dotNet
* Your application is intended to run in a Kubernetes-like infrastructure
* You don't want to spend time fighting with DevOps and learning Kubernetes in order to deploy to your enterprise infrastructure
You should use odo if:
* you are developing applications using Node.js, Spring Boot, or similar framework
* your applications are intended to run in Kubernetes and your Ops team will help deploy them
* you do not want to spend time learning about Kubernetes, and prefer to focus on developing applications using your favourite framework
If you are an application developer wishing to deploy to Kubernetes easily, then `odo` is for you.
Basically, if you are an application developer, you should use odo to run your application on a Kubernetes cluster.
### How is odo different from `kubectl` and `oc`?
Both [`kubectl`](https://github.com/kubernetes/kubectl) and [`oc`](https://github.com/openshift/oc/) require deep understanding of Kubernetes and OpenShift concepts.
Both [`kubectl`](https://github.com/kubernetes/kubectl) and [`oc`](https://github.com/openshift/oc/) require deep understanding of Kubernetes concepts.
`odo` is different as it focuses on application developers and cloud engineers. Both `kubectl` and `oc` are DevOps oriented tools and help in deploying applications to and maintaining a Kubernetes cluster provided you know Kubernetes well.
odo is different from these tools in that it is focused on application developers and architects. Both `kubectl` and `oc` are Ops oriented tools and help in deploying applications to and maintaining a Kubernetes cluster provided you know Kubernetes well.
`odo` is not meant to:
* Maintain a production Kubernetes cluster
* Perform sysadmin tasks against a Kubernetes cluster
You should not use odo:
* to maintain a production Kubernetes cluster
* to perform administration tasks against a Kubernetes cluster

View File

@@ -11,23 +11,12 @@ The images can be used for devfiles on IBM Z & Power
|Language | Devfile Name | Description | Image Source | Supported Platform |
| ----------- | ----------- | ----------- | ----------- | ----------- |
| dotnet | dotnet60 | Stack with .NET 6.0 | registry.access.redhat.com/ubi8/dotnet-60:6.0 | s390x |
| Go | go | Stack with the latest Go version | golang:latest | s390x |
| Java | java-maven | Upstream Maven and OpenJDK 11 | registry.redhat.io/codeready-workspaces/plugin-java11-openj9-rhel8 | s390x, ppc64le |
| Java | java-openliberty | Open Liberty microservice in Java | registry.redhat.io/codeready-workspaces/plugin-java11-openj9-rhel8 | s390x, ppc64le |
| Java | java-openliberty-gradle | Java application Gradle-built stack using the Open Liberty runtime | openliberty/application-stack:gradle-0.2 | s390x |
| Java | java-quarkus | Upstream Quarkus with Java+GraalVM | registry.redhat.io/codeready-workspaces/plugin-java8-openj9-rhel8 | s390x, ppc64le|
| Java | java-springboot | Spring Boot® using Java| registry.redhat.io/codeready-workspaces/plugin-java11-openj9-rhel8 | s390x, ppc64le|
| Vert.x Java| java-vertx | Upstream Vert.x using Java | registry.redhat.io/codeready-workspaces/plugin-java11-openj9-rhel8 | s390x, ppc64le|
| Java | java-wildfly-bootable-jar | Java stack with WildFly in bootable Jar mode, OpenJDK 11 and Maven 3.5 | registry.access.redhat.com/ubi8/openjdk-11 | s390x |
| Node.JS | nodejs | Stack with NodeJS 12 | registry.redhat.io/codeready-workspaces/plugin-java8-openj9-rhel8 | s390x, ppc64le|
| Node.JS | nodejs-angular | Stack with Angular 12 | node:lts-slim | s390x |
| Node.JS | nodejs-nextjs | Stack with Next.js 11 | node:lts-slim | s390x |
| Node.JS | nodejs-nuxtjs | Stack with Nuxt.js 2 | node:lts | s390x |
| Node.JS | nodejs-react | Stack with React 17 | node:lts-slim | s390x |
| Node.JS | nodejs-svelte | Stack with Svelte 3 | node:lts-slim | s390x |
| Node.JS | nodejs-vue | Stack with Vue 3 | node:lts-slim | s390x |
| PHP | php-laravel | Stack with Laravel 8 | composer:latest | s390x |
| Python| python | Python Stack with Python 3.7 | registry.redhat.io/codeready-workspaces/plugin-java8-openj9-rhel8 | s390x, ppc64le|
| Django| python-django| Python3.7 with Django| registry.redhat.io/codeready-workspaces/plugin-java8-openj9-rhel8| s390x, ppc64le|

View File

@@ -41,11 +41,11 @@ Note: Since operator [Dev4Devs PostgreSQL Operator](https://operatorhub.io/opera
1. Create custom CatalogSource
```shell
oc apply -f https://raw.githubusercontent.com/redhat-developer/odo/main/docs/website/manifests/catalog-source-$(uname -m).yaml
oc apply -f https://raw.githubusercontent.com/redhat-developer/odo/main/website/manifests/catalog-source-$(uname -m).yaml
```
2. Install PostgreSQL Operator from custom CatalogSource
```shell
oc create -f https://raw.githubusercontent.com/redhat-developer/odo/main/docs/website/manifests/postgresql-operator-dev4devs-com-IBM-Z-P.yaml
oc create -f https://raw.githubusercontent.com/redhat-developer/odo/main/website/manifests/postgresql-operator-dev4devs-com-IBM-Z-P.yaml
```
</details>
@@ -91,6 +91,29 @@ In this example we will use odo to manage a sample [Java JPA MicroService applic
```shell
odo push --show-log
```
**Troubleshooting**:
The Open Liberty image used by this application is relatively large (~850 MB), and depending on your internet connection, it might fail to download within the BuildTimeout set by odo; the default timeout is 300 seconds.
```shell
$ odo push
Validation
✓ Validating the devfile [45508ns]
Updating services
✓ Services and Links are in sync with the cluster, no changes are required
Creating Kubernetes resources for component mysboproj
✗ Waiting for component to start [5m]
✗ Failed to start component with name "mysboproj". Error: Failed to create the component: error while waiting for deployment rollout: timeout while waiting for mysboproj-app deployment roll out
```
In case this step fails due to a timeout, consider increasing the Build Timeout:
```shell
odo preference set BuildTimeout 600
```
Deploy the application to the cluster again:
```shell
odo push --show-log -f
```
5. The application is now deployed to the cluster - you can view the status of the cluster and the application test results by streaming the logs of the component that we pushed in the previous step.
```shell
odo log --follow

View File

@@ -48,6 +48,11 @@ module.exports = {
href: 'https://github.com/redhat-developer/odo',
label: 'GitHub',
position: 'right',
},
{
type: 'docsVersionDropdown',
position: 'right',
dropdownActiveClassDisabled: true,
},
],
},
@@ -117,16 +122,29 @@ module.exports = {
'@docusaurus/preset-classic',
{
docs: {
lastVersion: 'current',
versions: {
current: {
label: '2.5.0',
badge: true,
},
'3.0.0': {
label: '3.0.0 (Alpha 1) 🚧',
path: '3.0.0',
badge: true,
banner: 'unreleased',
},
},
sidebarPath: require.resolve('./sidebars.js'),
// Please change this to your repo.
editUrl:
'https://github.com/redhat-developer/odo/edit/main/website/',
'https://github.com/redhat-developer/odo/edit/main/website/',
},
blog: {
showReadingTime: true,
// Please change this to your repo.
editUrl:
'https://github.com/redhat-developer/odo/edit/main/website/blog/',
'https://github.com/redhat-developer/odo/edit/main/website/blog/',
blogSidebarTitle: 'All posts',
blogSidebarCount: 'ALL',
postsPerPage: 5,

0
docs/website/index.md Normal file
View File

File diff suppressed because it is too large Load Diff

View File

@@ -14,9 +14,9 @@
"write-heading-ids": "docusaurus write-heading-ids"
},
"dependencies": {
"@docusaurus/core": "^2.0.0-beta.ff31de0ff",
"@docusaurus/preset-classic": "^2.0.0-beta.ff31de0ff",
"@docusaurus/theme-search-algolia": "^2.0.0-beta.ff31de0ff",
"@docusaurus/core": "^2.0.0-beta.16",
"@docusaurus/preset-classic": "^2.0.0-beta.16",
"@docusaurus/theme-search-algolia": "^2.0.0-beta.16",
"@mdx-js/react": "^1.6.21",
"@segment/snippet": "^4.15.3",
"@svgr/webpack": "^5.5.0",

View File

@@ -0,0 +1,4 @@
{
"label": "Architecture",
"position": 6
}

View File

@@ -0,0 +1,61 @@
---
title: Secure Registry
sidebar_position: 5
---
**What is a secure devfile registry?**
A secure devfile registry is a devfile registry that a user can only access using credentials.
**Where to host secure devfile registry?**
A user can host a secure devfile registry on a private GitHub repository or an enterprise GitHub repository.
## Adding a secure devfile registry on a GitHub repository
1. [Create new private or enterprise GitHub repository](https://docs.github.com/en/github/creating-cloning-and-archiving-repositories/creating-a-new-repository) to host the secure devfile registry and push the devfile registry to the created repository. The sample GitHub-hosted devfile registry can be found [here](https://github.com/odo-devfiles/registry/).
2. [Create a personal access token](https://docs.github.com/en/github/authenticating-to-github/creating-a-personal-access-token) to access the secure devfile registry with `repo` as token scope.
3. Keyring setup: There is no specific keyring setup for secure devfile registry, you only need to ensure the keyring is working properly on your system.
If you hit issues with the keyring, please follow the below instructions to troubleshoot on the corresponding platform.
* [Mac keychain](https://support.apple.com/en-ca/guide/keychain-access/welcome/mac)
* [GNOME keyring setup on RedHat Enterprise Linux](https://nurdletech.com/linux-notes/agents/keyring.html)
* [GNOME keyring setup on Ubuntu Linux](https://howtoinstall.co/en/ubuntu/xenial/gnome-keyring)
* [Linux GNOME keyring](https://help.gnome.org/users/seahorse/stable/index.html.en)
* [Windows credential manager](https://support.microsoft.com/en-ca/help/4026814/windows-accessing-credential-manager)
4. Add a secure devfile registry to odo.
```shell
odo registry add <registry name> <registry URL> --token <token>
```
* registry name: user-defined devfile registry name.
* registry URL: the URL of the GitHub repository that you created in step 1.
* token: the personal access token that you created in step 2.
## Steps for setting up a secure starter project on a GitHub repository
1. [Create a new private or enterprise GitHub repository](https://docs.github.com/en/github/creating-cloning-and-archiving-repositories/creating-a-new-repository) and push the starter project to the created repository. The sample GitHub-hosted starter project can be found [here](https://github.com/odo-devfiles/nodejs-ex).
Ensure the `starterProjects` section in the corresponding devfile of your secure devfile registry links to the secure starter project, for example:
```yaml
starterProjects:
- name: nodejs-starter
git:
remotes:
origin: "<secure starter project link>"
```
2. [Create a personal access token](https://docs.github.com/en/github/authenticating-to-github/creating-a-personal-access-token) to access the secure devfile registry with `repo` as token scope.
3. Create a devfile component from the secure devfile registry and download the secure starter project.
```shell
odo create nodejs --registry <registry name> --starter --starter-token <starter project token>
```
* registry name: user-defined devfile registry name.
* starter project token: the personal access token that you created in step 2.
**Note:** GitHub only supports user-scoped personal access tokens. If the repository that hosts the secure registry and the repository that hosts the secure starter project are created under the same GitHub user, then the token can be used for both downloading the devfile and starter project. For that case you don't need to explicitly pass in the flag `--starter-token <starter project token>`, odo can automatically use one token to download both devfile and starter project.
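For example, assuming both the secure registry and the secure starter project belong to the same GitHub user and the token was already registered with `odo registry add`, the following works without the `--starter-token` flag:
```shell
odo create nodejs --registry <registry name> --starter
```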

View File

@@ -0,0 +1,4 @@
{
"label": "Command Reference",
"position": 4
}

View File

@@ -0,0 +1,26 @@
---
title: odo build-images
sidebar_position: 1
---
odo can build container images based on Dockerfiles, and push these images to their registries.
When running the command `odo build-images`, odo searches for all components in the `devfile.yaml` with the `image` type, for example:
```
components:
- image:
imageName: quay.io/myusername/myimage
dockerfile:
uri: ./Dockerfile
buildContext: ${PROJECTS_ROOT}
name: component-built-from-dockerfile
```
The `uri` field indicates the relative path of the Dockerfile to use, relative to the directory containing the `devfile.yaml`. The devfile specification indicates that `uri` could also be an HTTP URL, but this case is not supported by odo yet.
The `buildContext` indicates the directory used as build context. The default value is `${PROJECTS_ROOT}`.
For each image component, odo executes either `podman` or `docker` (the first one found, in this order), to build the image with the specified Dockerfile, build context and arguments.
If the `--push` flag is passed to the command, the images are pushed to their registries after they are built.
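For example, assuming the `devfile.yaml` shown above is present in the current directory:
```shell
# Build the image with podman or docker, whichever is found first
odo build-images

# Build the image and push it to quay.io/myusername/myimage
odo build-images --push
```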

View File

@@ -0,0 +1,172 @@
---
title: odo catalog
sidebar_position: 2
---
odo uses different *catalogs* to deploy *components* and *services*.
## Components
odo uses the portable *devfile* format to describe the components. It can connect to various devfile registries to download devfiles for different languages and frameworks. See [`odo registry`](/docs/command-reference/registry) for more information.
### Listing components
You can list all the *devfiles* available on the different registries with the command:
```
odo catalog list components
```
Example:
```
$ odo catalog list components
Odo Devfile Components:
NAME DESCRIPTION REGISTRY
go Stack with the latest Go version DefaultDevfileRegistry
java-maven Upstream Maven and OpenJDK 11 DefaultDevfileRegistry
nodejs Stack with Node.js 14 DefaultDevfileRegistry
php-laravel Stack with Laravel 8 DefaultDevfileRegistry
python Python Stack with Python 3.7 DefaultDevfileRegistry
[...]
```
### Getting information about a component
You can get more information about a specific component with the command:
```
odo catalog describe component
```
Example:
```
$ odo catalog describe component nodejs
* Registry: DefaultDevfileRegistry
Starter Projects:
---
name: nodejs-starter
attributes: {}
description: ""
subdir: ""
projectsource:
sourcetype: ""
git:
gitlikeprojectsource:
commonprojectsource: {}
checkoutfrom: null
remotes:
origin: https://github.com/odo-devfiles/nodejs-ex.git
zip: null
custom: null
```
*Registry* is the registry from which the devfile is retrieved.
*Starter projects* are sample projects in the same language and framework of the devfile, that can help you start a new project. See [`odo create`](/docs/command-reference/create) for more information on creating a project from a starter project.
## Services
odo can deploy *services* with the help of *operators*.
Only operators deployed with the help of the [*Operator Lifecycle Manager*](https://olm.operatorframework.io/) are supported by odo. See [Installing the Operator Lifecycle Manager (OLM)](/docs/getting-started/cluster-setup/kubernetes#installing-the-operator-lifecycle-manager-olm) for more information.
### Listing services
You can get the list of available operators and their associated services with the command:
```
odo catalog list services
```
Example:
```
$ odo catalog list services
Services available through Operators
NAME CRDs
postgresql-operator.v0.1.1 Backup, Database
redis-operator.v0.8.0 RedisCluster, Redis
```
In this example, you can see that two operators are installed in the cluster. The `postgresql-operator.v0.1.1` operator can deploy services related to PostgreSQL: `Backup` and `Database`. The `redis-operator.v0.8.0` operator can deploy services related to Redis: `RedisCluster` and `Redis`.
> Note: To get a list of all the available operators, odo fetches the `ClusterServiceVersion` (`CSV`) resources of the current namespace that are in a *Succeeded* phase. For operators that support cluster-wide access, when a new namespace is created, these resources are automatically added to it, but it may take some time before they are in the *Succeeded* phase, and odo may return an empty list until the resources are ready.
### Searching services
You can search for a specific service by a keyword with the command:
```
odo catalog search service
```
Example:
```
$ odo catalog search service postgre
Services available through Operators
NAME CRDs
postgresql-operator.v0.1.1 Backup, Database
```
You may see a similar list that contains only the relevant operators, whose name contains the searched keyword.
### Getting information about a service
You can get more information about a specific service with the command:
```
odo catalog describe service
```
Example:
```
$ odo catalog describe service postgresql-operator.v0.1.1/Database
KIND: Database
VERSION: v1alpha1
DESCRIPTION:
Database is the Schema for the the Database Database API
FIELDS:
awsAccessKeyId (string)
AWS S3 accessKey/token ID
Key ID of AWS S3 storage. Default Value: nil Required to create the Secret
with the data to allow send the backup files to AWS S3 storage.
[...]
```
A service is represented in the cluster by a `CustomResourceDefinition` (commonly named `CRD`). This command will display the details about this CRD such as `kind`, `version`, and the list of fields available to define an instance of this custom resource.
The list of fields is extracted from the *OpenAPI schema* included in the `CRD`. This information is optional in a `CRD`, and if it is not present, it is extracted from the `ClusterServiceVersion` (`CSV`) representing the service instead.
It is also possible to request the description of an Operator backed service without providing the CRD type information. For example, to describe the Redis operator on the cluster without specifying a CRD, you can run:
```shell
odo catalog describe service redis-operator.v0.8.0
NAME: redis-operator.v0.8.0
DESCRIPTION:
A Golang based redis operator that will make/oversee Redis
standalone/cluster mode setup on top of the Kubernetes. It can create a
redis cluster setup with best practices on Cloud as well as the Bare metal
environment. Also, it provides an in-built monitoring capability using
... (cut short for brevity)
Logging Operator is licensed under [Apache License, Version
2.0](https://github.com/OT-CONTAINER-KIT/redis-operator/blob/master/LICENSE)
CRDs:
NAME DESCRIPTION
RedisCluster Redis Cluster
Redis Redis
```

View File

@@ -0,0 +1,92 @@
---
title: odo create
sidebar_position: 3
---
odo uses the [_devfile_](https://devfile.io) to store the configuration of and describe the resources like storage, services, etc. of a component. The _odo create_ command allows you to generate this file.
## Creating a component
To create a _devfile_ for an existing project, you can execute `odo create` with the name and type of your component (for example, nodejs or go):
```
odo create nodejs mynodejs
```
Here `nodejs` is the type of the component and `mynodejs` is the name of the component odo creates for you.
> Note: for a list of all the supported component types, run `odo catalog list components`.
If your source code exists outside the current directory, the `--context` flag can be used to specify the path. For example, if the source for the nodejs component was in a folder called `node-backend` relative to the current working directory, you could run:
```
odo create nodejs mynodejs --context ./node-backend
```
Both relative and absolute paths are supported.
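For example, with an absolute path (the path below is hypothetical):
```shell
odo create nodejs mynodejs --context /home/user/projects/node-backend
```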
To specify the project or app of where your component will be deployed, you can use the `--project` and `--app` flags.
For example, to create a component that is a part of the `myapp` app inside the `backend` project:
```
odo create nodejs --app myapp --project backend
```
> Note: if these are not specified, they will default to the active app and project
## Starter projects
If you do not have existing source code but wish to get up and running quickly to experiment with devfiles and components, you could use the starter projects to get started. To use a starter project, include the `--starter` flag in your `odo create` command.
To get a list of available starter projects for a component type, you can use the `odo catalog describe component` command. For example, to get all available starter projects for the nodejs component type, run:
```
odo catalog describe component nodejs
```
Then specify the desired project with the `--starter` flag:
```
odo create nodejs --starter nodejs-starter
```
This will download the example template corresponding to the chosen component type (in the example above, `nodejs`) in your current directory (or the path provided with the `--context` flag).
If a starter project has its own devfile, then this devfile will be preserved.
## Using an existing devfile
If you want to create a new component from an existing devfile, you can do so by specifying the path to the devfile with the `--devfile` flag.
For example, the following command will create a component called `mynodejs`, based on the devfile from GitHub:
```
odo create mynodejs --devfile https://raw.githubusercontent.com/odo-devfiles/registry/master/devfiles/nodejs/devfile.yaml
```
## Interactive creation
The `odo create` command can also be run interactively. Execute `odo create`, which will guide you through a list of steps to create a component:
```sh
odo create
? Which devfile component type do you wish to create go
? What do you wish to name the new devfile component go-api
? What project do you want the devfile component to be created in default
Devfile Object Validation
✓ Checking devfile existence [164258ns]
✓ Creating a devfile component from registry: DefaultDevfileRegistry [246051ns]
Validation
✓ Validating if devfile name is correct [92255ns]
? Do you want to download a starter project Yes
Starter Project
✓ Downloading starter project go-starter from https://github.com/devfile-samples/devfile-stack-go.git [429ms]
Please use `odo push` command to create the component with source deployed
```
You will be prompted to choose the component type, name and the project for the component. You can also choose whether or not to download a starter project. Once finished, a new `devfile.yaml` file should be created in the working directory.
To deploy these resources to your cluster, run `odo push`.

View File

@@ -0,0 +1,40 @@
---
title: odo delete
sidebar_position: 4
---
`odo delete` command is useful for deleting resources that are managed by odo.
## Deleting a component
To delete a _devfile_ component, you can execute `odo delete`.
```shell
odo delete
```
If the component is pushed to the cluster, running the above command will delete the component from the cluster, along with its dependent storage, URLs, secrets, and other resources.
If it is not pushed, the command would exit with an error stating that it could not find the resources on the cluster.
Use `-f` or `--force` flag to avoid the confirmation questions.
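For example, to delete the pushed component without being prompted for confirmation:
```shell
odo delete -f
```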
## Un-deploying Devfile Kubernetes components
To undeploy the Devfile Kubernetes components deployed with `odo deploy` from the cluster, you can execute the `odo delete` command with `--deploy` flag:
```shell
odo delete --deploy
```
Use `-f` or `--force` flag to avoid the confirmation questions.
## Delete Everything
To delete a _devfile_ component, the Devfile Kubernetes component (deployed via `odo deploy`), the Devfile, and the local configuration, you can execute the `odo delete` command with the `--all` flag:
```shell
odo delete --all
```
## Available Flags
* `-f`, `--force` - Use this flag to avoid the confirmation questions.
* `-w`, `--wait` - Use this flag to wait for the deletion of the component and its dependents; this does not work with un-deployment.
Check the [documentation on flags](flags.md) to see more flags available.

View File

@@ -0,0 +1,67 @@
---
title: odo deploy
sidebar_position: 5
---
odo can be used to deploy components in a similar manner they would be deployed by a CI/CD system,
by first building the images of the containers to deploy, then by deploying the Kubernetes resources
necessary to deploy the components.
When running the command `odo deploy`, odo searches for the default command of kind `deploy` in the devfile, and executes this command.
The kind `deploy` is supported by the devfile format starting from version 2.2.0.
The `deploy` command is typically a *composite* command, composed of several *apply* commands:
- a command referencing an `image` component that, when applied, will build the image of the container to deploy, and push it to its registry,
- a command referencing a [`kubernetes` component](https://devfile.io/docs/devfile/2.2.0/user-guide/adding-kubernetes-component-to-a-devfile.html) that, when applied, will create a Kubernetes resource in the cluster.
With the following example `devfile.yaml` file, a container image will be built by using the `Dockerfile` present in the directory,
the image will be pushed to its registry and a Kubernetes Deployment will be created in the cluster, using this freshly built image.
```
schemaVersion: 2.2.0
[...]
variables:
CONTAINER_IMAGE: quay.io/phmartin/myimage
commands:
- id: build-image
apply:
component: outerloop-build
- id: deployk8s
apply:
component: outerloop-deploy
- id: deploy
composite:
commands:
- build-image
- deployk8s
group:
kind: deploy
isDefault: true
components:
- name: outerloop-build
image:
imageName: "{{CONTAINER_IMAGE}}"
dockerfile:
uri: ./Dockerfile
buildContext: ${PROJECTS_ROOT}
- name: outerloop-deploy
kubernetes:
inlined: |
kind: Deployment
apiVersion: apps/v1
metadata:
name: my-component
spec:
replicas: 1
selector:
matchLabels:
app: node-app
template:
metadata:
labels:
app: node-app
spec:
containers:
- name: main
image: {{CONTAINER_IMAGE}}
```

View File

@@ -0,0 +1,17 @@
---
title: Common Flags
sidebar_position: 50
---
### Available Flags
Following are the flags commonly available with almost every odo command.
* `--context` - Use this flag to set the context directory where the component is defined.
* `--project` - Use this flag to set the project for the component; defaults to the project defined in the local configuration; if none is available, then current project on the cluster
* `--app` - Use this flag to set the application of the component; defaults to the application defined in the local configuration; if none is available, then _app_
* `--kubeconfig` - Use this flag to set path to the kubeconfig if not using the default configuration
* `--show-log` - Use this flag to see the logs
* `-f`, `--force` - Use this flag to tell the command not to prompt user for confirmation
* `-v`, `--v` - Use this flag to set the verbosity level. See [Logging in odo](https://github.com/redhat-developer/odo/wiki/Logging-in-odo) for more information.
* `-h`, `--help` - Use this flag to get help on a command
**Note:** Some flags might not be available in some commands, run the command with `--help` to get a list of all the available flags.
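As an illustration, several of these flags can be combined in a single command where that command supports them; the directory and project names below are hypothetical:
```shell
odo push --context ./backend --project myproject --show-log
```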

View File

@@ -0,0 +1,44 @@
---
title: JSON Output
sidebar_position: 100
---
The `odo` commands that output some content generally accept a `-o json` flag to output this content in a JSON format, suitable for other programs to parse this output more easily.
The output structure is similar to Kubernetes resources, with `kind`, `apiVersion`, `metadata`, `spec` and `status` fields.
List commands return a `List` resource, containing an `items` (or similar) field listing the items of the list, each item being also similar to Kubernetes resources.
Delete commands return a `Status` resource; see the [Status Kubernetes resource](https://kubernetes.io/docs/reference/kubernetes-api/common-definitions/status/).
Other commands return a resource associated with the command (`Application`, `Storage`, `URL`, etc.).
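As a quick illustration, a list command returns a `List` resource whose items carry their own kind. The output below is trimmed and illustrative; the exact fields depend on your odo version:
```shell
$ odo project list -o json
{
  "kind": "List",
  "apiVersion": "odo.dev/v1alpha1",
  "items": [
    {
      "kind": "Project",
      "apiVersion": "odo.dev/v1alpha1",
      ...
    }
  ]
}
```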
The exhaustive list of commands accepting the `-o json` flag is currently:
| commands | Kind (version) | Kind (version) of list items | Complete content? |
|--------------------------------|-----------------------------------------|--------------------------------------------------------------|---------------------------|
| odo application describe | Application (odo.dev/v1alpha1) | *n/a* |no |
| odo application list | List (odo.dev/v1alpha1) | Application (odo.dev/v1alpha1) | ? |
| odo catalog list components | List (odo.dev/v1alpha1) | *missing* | yes |
| odo catalog list services | List (odo.dev/v1alpha1) | ClusterServiceVersion (operators.coreos.com/v1alpha1) | ? |
| odo catalog describe component | *missing* | *n/a* | yes |
| odo catalog describe service | CRDDescription (odo.dev/v1alpha1) | *n/a* | yes |
| odo component create | Component (odo.dev/v1alpha1) | *n/a* | yes |
| odo component describe | Component (odo.dev/v1alpha1) | *n/a* | yes |
| odo component list | List (odo.dev/v1alpha1) | Component (odo.dev/v1alpha1) | yes |
| odo config view | DevfileConfiguration (odo.dev/v1alpha1) | *n/a* | yes |
| odo debug info | OdoDebugInfo (odo.dev/v1alpha1) | *n/a* | yes |
| odo env view | EnvInfo (odo.dev/v1alpha1) | *n/a* | yes |
| odo preference view | PreferenceList (odo.dev/v1alpha1) | *n/a* | yes |
| odo project create | Project (odo.dev/v1alpha1) | *n/a* | yes |
| odo project delete | Status (v1) | *n/a* | yes |
| odo project get | Project (odo.dev/v1alpha1) | *n/a* | yes |
| odo project list | List (odo.dev/v1alpha1) | Project (odo.dev/v1alpha1) | yes |
| odo registry list | List (odo.dev/v1alpha1) | *missing* | yes |
| odo service create | Service | *n/a* | yes |
| odo service describe | Service | *n/a* | yes |
| odo service list | List (odo.dev/v1alpha1) | Service | yes |
| odo storage create | Storage (odo.dev/v1alpha1) | *n/a* | yes |
| odo storage delete | Status (v1) | *n/a* | yes |
| odo storage list | List (odo.dev/v1alpha1) | Storage (odo.dev/v1alpha1) | yes |
| odo url list | List (odo.dev/v1alpha1) | URL (odo.dev/v1alpha1) | yes |

View File

@@ -0,0 +1,383 @@
---
title: odo link
sidebar_position: 7
---
`odo link` command helps link an odo component to an Operator backed service or another odo component. It does this by using [Service Binding Operator](https://github.com/redhat-developer/service-binding-operator). At the time of writing this, odo makes use of the Service Binding library and not the Operator itself to achieve the desired functionality.
In this document we will cover various options to create link between a component & a service, and a component & another component. The steps in this document are going to be based on the [odo quickstart project](https://github.com/dharmit/odo-quickstart/) that we covered in [Quickstart guide](/docs/getting-started/quickstart). The outputs mentioned in this document are based on commands executed on [minikube cluster](/docs/getting-started/cluster-setup/kubernetes).
This document assumes that you know how to [create components](/docs/command-reference/create) and [services](/docs/command-reference/service). It also assumes that you have cloned the [odo quickstart project](https://github.com/dharmit/odo-quickstart/). Terminology used in this document:
- *quickstart project*: git clone of the odo quickstart project having below directory structure:
```shell
$ tree -L 1
.
├── backend
├── frontend
├── postgrescluster.yaml
├── quickstart.code-workspace
└── README.md
2 directories, 3 files
```
- *backend component*: `backend` directory in above tree structure
- *frontend component*: `frontend` directory in above tree structure
- *Postgres service*: Operator backed service created from *backend component* using the `odo service create --from-file ../postgrescluster.yaml` command.
## Various linking options
odo provides various options to link a component with an Operator backed service or another odo component. All these options (or flags) can be used irrespective of whether you are linking a component to a service or another component.
### Default behaviour
By default, `odo link` creates a directory named `kubernetes/` in your component directory and stores the information (YAML manifests) about services and links in it. When you do `odo push`, odo compares these manifests with the state of the things on the Kubernetes cluster and decides whether it needs to create, modify or destroy resources to match what is specified by the user.
### The `--inlined` flag
If you specified `--inlined` flag to the `odo link` command, odo will store the link information inline in the `devfile.yaml` in the component directory instead of creating a file under `kubernetes/` directory. The behaviour of `--inlined` flag is similar in both the `odo link` and `odo service create` commands. This flag is helpful if you would like everything to be stored in a single `devfile.yaml`. You will have to remember to use `--inlined` flag with each `odo link` and `odo service create` commands that you execute for the component.
### The `--map` flag
At times, you might want to add more binding information to the component than what is available by default. For example, if you are linking the component with a service and would like to bind some information from the service's spec (short for specification), you could use the `--map` flag. Note that odo doesn't do any validation against the spec of the service/component being linked. Using this flag is recommended only if you are comfortable with reading the Kubernetes YAML manifests.
### The `--bind-as-files` flag
For all the linking options discussed so far, odo injects the binding information into the component as environment variables. If you would like to instead mount this information as files, you could use the `--bind-as-files` flag. This will make odo inject the binding information as files into the `/bindings` location within your component's Pod. Comparing with the environment variables paradigm, when you use `--bind-as-files`, the files are named after the keys and the value of these keys is stored as the contents of these files.
## Examples
### Default `odo link`
We will link the backend component with the Postgres service using default `odo link` command. For the backend component, make sure that your component and service are pushed to the cluster:
```shell
$ odo list
APP NAME PROJECT TYPE STATE MANAGED BY ODO
app backend myproject spring Pushed Yes
$ odo service list
NAME MANAGED BY ODO STATE AGE
PostgresCluster/hippo Yes (backend) Pushed 59m41s
```
Now, run `odo link` to link the backend component with the Postgres service:
```shell
odo link PostgresCluster/hippo
```
Example output:
```shell
$ odo link PostgresCluster/hippo
✓ Successfully created link between component "backend" and service "PostgresCluster/hippo"
To apply the link, please use `odo push`
```
And then run `odo push` for the link to actually get created on the Kubernetes cluster.
Upon successful `odo push`, you can notice a few things:
1. When you open the URL for the application deployed by the backend component, it shows you a list of todo items in the database. For example, for the below `odo url list` output, we will append the path where todos are listed:
```shell
$ odo url list
Found the following URLs for component backend
NAME STATE URL PORT SECURE KIND
8080-tcp Pushed http://8080-tcp.192.168.39.112.nip.io 8080 false ingress
```
The correct path for such a URL would be http://8080-tcp.192.168.39.112.nip.io/api/v1/todos. Note that the exact URL will be different for your setup. Also note that there are no todos in the database unless you add some, so the URL might just show an empty list.
2. You can see binding information related to Postgres service injected into the backend component. This binding information is injected, by default, as environment variables. You can check it out using the `odo describe` command from backend component's directory:
```shell
odo describe
```
Example output:
```shell
$ odo describe
Component Name: backend
Type: spring
Environment Variables:
· PROJECTS_ROOT=/projects
· PROJECT_SOURCE=/projects
· DEBUG_PORT=5858
Storage:
· m2 of size 3Gi mounted to /home/user/.m2
URLs:
· http://8080-tcp.192.168.39.112.nip.io exposed via 8080
Linked Services:
· PostgresCluster/hippo
Environment Variables:
· POSTGRESCLUSTER_PGBOUNCER-EMPTY
· POSTGRESCLUSTER_PGBOUNCER.INI
· POSTGRESCLUSTER_ROOT.CRT
· POSTGRESCLUSTER_VERIFIER
· POSTGRESCLUSTER_ID_ECDSA
· POSTGRESCLUSTER_PGBOUNCER-VERIFIER
· POSTGRESCLUSTER_TLS.CRT
· POSTGRESCLUSTER_PGBOUNCER-URI
· POSTGRESCLUSTER_PATRONI.CRT-COMBINED
· POSTGRESCLUSTER_USER
· pgImage
· pgVersion
· POSTGRESCLUSTER_CLUSTERIP
· POSTGRESCLUSTER_HOST
· POSTGRESCLUSTER_PGBACKREST_REPO.CONF
· POSTGRESCLUSTER_PGBOUNCER-USERS.TXT
· POSTGRESCLUSTER_SSH_CONFIG
· POSTGRESCLUSTER_TLS.KEY
· POSTGRESCLUSTER_CONFIG-HASH
· POSTGRESCLUSTER_PASSWORD
· POSTGRESCLUSTER_PATRONI.CA-ROOTS
· POSTGRESCLUSTER_DBNAME
· POSTGRESCLUSTER_PGBOUNCER-PASSWORD
· POSTGRESCLUSTER_SSHD_CONFIG
· POSTGRESCLUSTER_PGBOUNCER-FRONTEND.KEY
· POSTGRESCLUSTER_PGBACKREST_INSTANCE.CONF
· POSTGRESCLUSTER_PGBOUNCER-FRONTEND.CA-ROOTS
· POSTGRESCLUSTER_PGBOUNCER-HOST
· POSTGRESCLUSTER_PORT
· POSTGRESCLUSTER_ROOT.KEY
· POSTGRESCLUSTER_SSH_KNOWN_HOSTS
· POSTGRESCLUSTER_URI
· POSTGRESCLUSTER_PATRONI.YAML
· POSTGRESCLUSTER_DNS.CRT
· POSTGRESCLUSTER_DNS.KEY
· POSTGRESCLUSTER_ID_ECDSA.PUB
· POSTGRESCLUSTER_PGBOUNCER-FRONTEND.CRT
· POSTGRESCLUSTER_PGBOUNCER-PORT
· POSTGRESCLUSTER_CA.CRT
```
A few of these variables are used in the backend component's [`src/main/resources/application.properties` file](https://github.com/dharmit/odo-quickstart/blob/main/backend/src/main/resources/application.properties) so that the Java Spring Boot application can connect to the Postgres database service.
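For illustration only (the property names below are assumptions and are not taken verbatim from the quickstart repository), a Spring Boot `application.properties` could consume some of the injected variables like this:
```properties
# Hypothetical sketch: resolve connection details from the env vars injected by the link
spring.datasource.url=jdbc:postgresql://${POSTGRESCLUSTER_HOST}:${POSTGRESCLUSTER_PORT}/${POSTGRESCLUSTER_DBNAME}
spring.datasource.username=${POSTGRESCLUSTER_USER}
spring.datasource.password=${POSTGRESCLUSTER_PASSWORD}
```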
3. Lastly, odo has created a directory called `kubernetes/` in your backend component's directory which contains below files.
```shell
$ ls kubernetes
odo-service-backend-postgrescluster-hippo.yaml odo-service-hippo.yaml
```
These files contain the information (YAML manifests) about two things:
1. `odo-service-hippo.yaml` - the Postgres service we created using `odo service create --from-file ../postgrescluster.yaml` command.
2. `odo-service-backend-postgrescluster-hippo.yaml` - the link we created using `odo link` command.
### `odo link` with `--inlined`
Using the `--inlined` flag with the `odo link` command does the exact same thing to our application (that is, injects binding information) as an `odo link` command without the flag does. However, the subtle difference is that in the above case we saw two manifest files under the `kubernetes/` directory — one for the Postgres service and the other for the link between the backend component and this service — but when we pass the `--inlined` flag, odo does not create a file under the `kubernetes/` directory to store the YAML manifest; it stores it inline in the `devfile.yaml` file.
To see this, let's unlink our component from the Postgres service first:
```shell
odo unlink PostgresCluster/hippo
```
Example output:
```shell
$ odo unlink PostgresCluster/hippo
✓ Successfully unlinked component "backend" from service "PostgresCluster/hippo"
To apply the changes, please use `odo push`
```
To unlink them on the cluster, run `odo push`. Now if you take a look at the `kubernetes/` directory, you'll see only one file in it:
```shell
$ ls kubernetes
odo-service-hippo.yaml
```
Next, let's use the `--inlined` flag to create a link:
```shell
odo link PostgresCluster/hippo --inlined
```
Example output:
```shell
$ odo link PostgresCluster/hippo --inlined
✓ Successfully created link between component "backend" and service "PostgresCluster/hippo"
To apply the link, please use `odo push`
```
Just like without the `--inlined` flag, you need to do `odo push` for the link to get created on the cluster. But where did odo store the configuration/manifest required to create this link? odo stores this in `devfile.yaml`. You can see an entry like below in this file:
```yaml
kubernetes:
inlined: |
apiVersion: binding.operators.coreos.com/v1alpha1
kind: ServiceBinding
metadata:
creationTimestamp: null
name: backend-postgrescluster-hippo
spec:
application:
group: apps
name: backend-app
resource: deployments
version: v1
bindAsFiles: false
detectBindingResources: true
services:
- group: postgres-operator.crunchydata.com
id: hippo
kind: PostgresCluster
name: hippo
version: v1beta1
status:
secret: ""
name: backend-postgrescluster-hippo
```
Now if you were to do `odo unlink PostgresCluster/hippo`, odo would first remove the link information from the `devfile.yaml` and then a subsequent `odo push` would delete the link from the cluster.
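A minimal sketch of that flow:
```shell
odo unlink PostgresCluster/hippo
odo push
```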
### Custom bindings
`odo link` accepts the flag `--map` which can inject custom binding information into the component. Such binding information will be fetched from the manifest of the resource we are linking to our component. For example, speaking in context of the backend component and Postgres service, we can inject information from the Postgres service's manifest ([`postgrescluster.yaml` file](https://github.com/dharmit/odo-quickstart/blob/main/postgrescluster.yaml)) into the backend component.
Considering the name of your `PostgresCluster` service is `hippo` (check the output of `odo service list` if your PostgresCluster service is named differently), if we wanted to inject the value of `postgresVersion` from that YAML definition into our backend component:
```shell
odo link PostgresCluster/hippo --map pgVersion='{{ .hippo.spec.postgresVersion }}'
```
Note that, if the name of your Postgres service is different from `hippo`, you will have to specify that in the above command in place of `.hippo`. For example, if your `PostgresCluster` service is named `database`, you would change the link command as shown below:
```shell
$ odo service list
NAME MANAGED BY ODO STATE AGE
PostgresCluster/database Yes (backend) Pushed 2h5m43s
$ odo link PostgresCluster/database --map pgVersion='{{ .database.spec.postgresVersion }}'
```
After the link operation, do `odo push` as usual. Upon successful completion of the push operation, you can run the below command from your backend component's directory to validate whether the custom mapping got injected properly:
```shell
odo exec -- env | grep pgVersion
```
Example output:
```shell
$ odo exec -- env | grep pgVersion
pgVersion=13
```
Since a user might want to inject more than just one piece of custom binding information, `odo link` accepts multiple key-value pairs of mappings. The only constraint being that these should be specified as `--map <key>=<value>`. For example, if you want to also inject Postgres image information along with the version, you could do:
```shell
odo link PostgresCluster/hippo --map pgVersion='{{ .hippo.spec.postgresVersion }}' --map pgImage='{{ .hippo.spec.image }}'
```
and do `odo push`. The way to validate if both the mappings got injected correctly would be to do:
```shell
odo exec -- env | grep -e "pgVersion\|pgImage"
```
Example output:
```shell
$ odo exec -- env | grep -e "pgVersion\|pgImage"
pgVersion=13
pgImage=registry.developers.crunchydata.com/crunchydata/crunchy-postgres-ha:centos8-13.4-0
```
#### To inline or not?
You can stick to the default behaviour, wherein `odo link` will generate a manifest file for the link under the `kubernetes/` directory, or you could use the `--inlined` flag if you prefer to store everything in a single `devfile.yaml` file. Either approach works for adding custom mappings.
## Binding as files
Another helpful flag that `odo link` provides is called `--bind-as-files`. When this flag is passed, the binding information is not injected into the component's Pod as environment variables but is mounted as a filesystem. We will see a few examples that will make things clearer.
Ensure that there are no existing links between the backend component and the Postgres service. You could do this by running `odo describe` in the backend component's directory and check if you see something like below in the output:
```shell
Linked Services:
· PostgresCluster/hippo
```
Unlink the service from the component using:
```shell
odo unlink PostgresCluster/hippo
odo push
```
## `--bind-as-files` examples
### With default `odo link`
Default behaviour means odo creating the manifest file under `kubernetes/` directory to store the link information. Link the backend component and Postgres service using:
```shell
odo link PostgresCluster/hippo --bind-as-files
odo push
```
Example `odo describe` output:
```shell
$ odo describe
Component Name: backend
Type: spring
Environment Variables:
· PROJECTS_ROOT=/projects
· PROJECT_SOURCE=/projects
· DEBUG_PORT=5858
· SERVICE_BINDING_ROOT=/bindings
· SERVICE_BINDING_ROOT=/bindings
Storage:
· m2 of size 3Gi mounted to /home/user/.m2
URLs:
· http://8080-tcp.192.168.39.112.nip.io exposed via 8080
Linked Services:
· PostgresCluster/hippo
Files:
· /bindings/backend-postgrescluster-hippo/pgbackrest_instance.conf
· /bindings/backend-postgrescluster-hippo/user
· /bindings/backend-postgrescluster-hippo/ssh_known_hosts
· /bindings/backend-postgrescluster-hippo/clusterIP
· /bindings/backend-postgrescluster-hippo/password
· /bindings/backend-postgrescluster-hippo/patroni.yaml
· /bindings/backend-postgrescluster-hippo/pgbouncer-frontend.crt
· /bindings/backend-postgrescluster-hippo/pgbouncer-host
· /bindings/backend-postgrescluster-hippo/root.key
· /bindings/backend-postgrescluster-hippo/pgbouncer-frontend.key
· /bindings/backend-postgrescluster-hippo/pgbouncer.ini
· /bindings/backend-postgrescluster-hippo/uri
· /bindings/backend-postgrescluster-hippo/config-hash
· /bindings/backend-postgrescluster-hippo/pgbouncer-empty
· /bindings/backend-postgrescluster-hippo/port
· /bindings/backend-postgrescluster-hippo/dns.crt
· /bindings/backend-postgrescluster-hippo/pgbouncer-uri
· /bindings/backend-postgrescluster-hippo/root.crt
· /bindings/backend-postgrescluster-hippo/ssh_config
· /bindings/backend-postgrescluster-hippo/dns.key
· /bindings/backend-postgrescluster-hippo/host
· /bindings/backend-postgrescluster-hippo/patroni.crt-combined
· /bindings/backend-postgrescluster-hippo/pgbouncer-frontend.ca-roots
· /bindings/backend-postgrescluster-hippo/tls.key
· /bindings/backend-postgrescluster-hippo/verifier
· /bindings/backend-postgrescluster-hippo/ca.crt
· /bindings/backend-postgrescluster-hippo/dbname
· /bindings/backend-postgrescluster-hippo/patroni.ca-roots
· /bindings/backend-postgrescluster-hippo/pgbackrest_repo.conf
· /bindings/backend-postgrescluster-hippo/pgbouncer-port
· /bindings/backend-postgrescluster-hippo/pgbouncer-verifier
· /bindings/backend-postgrescluster-hippo/id_ecdsa
· /bindings/backend-postgrescluster-hippo/id_ecdsa.pub
· /bindings/backend-postgrescluster-hippo/pgbouncer-password
· /bindings/backend-postgrescluster-hippo/pgbouncer-users.txt
· /bindings/backend-postgrescluster-hippo/sshd_config
· /bindings/backend-postgrescluster-hippo/tls.crt
```
Everything that was an environment variable in the `key=value` format in the earlier `odo describe` output is now mounted as a file. Let's `cat` the contents of a few of these files:
```shell
$ odo exec -- cat /bindings/backend-postgrescluster-hippo/password
q({JC:jn^mm/Bw}eu+j.GX{k
$ odo exec -- cat /bindings/backend-postgrescluster-hippo/user
hippo
$ odo exec -- cat /bindings/backend-postgrescluster-hippo/clusterIP
10.101.78.56
```
### With `--inlined`
The result of using `--bind-as-files` and `--inlined` together is similar to `odo link --inlined`, in that the manifest of the link gets stored in the `devfile.yaml` instead of being stored in a separate file under the `kubernetes/` directory. Other than that, the `odo describe` output would look the same as seen in the [above section](#with-default-odo-link).
### Custom bindings
When you pass custom bindings while linking the backend component with the Postgres service, these custom bindings are injected not as environment variables but are mounted as files. Consider the below example:
```shell
odo link PostgresCluster/hippo --map pgVersion='{{ .hippo.spec.postgresVersion }}' --map pgImage='{{ .hippo.spec.image }}' --bind-as-files
odo push
```
These custom bindings get mounted as files instead of being injected as environment variables. The way to validate that it worked is:
```shell
$ odo exec -- cat /bindings/backend-postgrescluster-hippo/pgVersion
13
$ odo exec -- cat /bindings/backend-postgrescluster-hippo/pgImage
registry.developers.crunchydata.com/crunchydata/crunchy-postgres-ha:centos8-13.4-0
```

View File

@@ -0,0 +1,87 @@
---
title: odo registry
sidebar_position: 8
---
odo uses the portable *devfile* format to describe the components. odo can connect to various devfile registries to download devfiles for different languages and frameworks.
You can connect to publicly available devfile registries, or you can install your own [Secure Registry](/docs/architecture/secure-registry).
You can use the `odo registry` command to manage the registries used by odo to retrieve devfile information.
## Listing the registries
You can use the following command to list the registries currently contacted by odo:
```
odo registry list
```
For example:
```
$ odo registry list
NAME URL SECURE
DefaultDevfileRegistry https://registry.devfile.io No
```
`DefaultDevfileRegistry` is the default registry used by odo; it is provided by the [devfile.io](https://devfile.io) project.
## Adding a registry
You can use the following command to add a registry:
```
odo registry add
```
For example:
```
$ odo registry add StageRegistry https://registry.stage.devfile.io
New registry successfully added
```
If you are deploying your own Secure Registry, you can specify the personal access token to authenticate to the secure registry with the `--token` flag:
```
$ odo registry add MyRegistry https://myregistry.example.com --token <access_token>
New registry successfully added
```
## Deleting a registry
You can delete a registry with the command:
```
odo registry delete
```
For example:
```
$ odo registry delete StageRegistry
? Are you sure you want to delete registry "StageRegistry" Yes
Successfully deleted registry
```
You can use the `--force` (or `-f`) flag to force the deletion of the registry without confirmation.
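For example:
```shell
odo registry delete StageRegistry -f
```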
## Updating a registry
You can update the URL and/or the personal access token of a registry already registered with the command:
```
odo registry update
```
For example:
```
$ odo registry update MyRegistry https://otherregistry.example.com --token <other_access_token>
? Are you sure you want to update registry "MyRegistry" Yes
Successfully updated registry
```
You can use the `--force` (or `-f`) flag to force the update of the registry without confirmation.

View File

@@ -0,0 +1,252 @@
---
title: odo service
sidebar_position: 9
---
odo can deploy *services* with the help of *operators*.
The list of available operators and services available for installation can be found with the [`odo catalog` command](/docs/command-reference/catalog).
Services are created in the context of a *component*, so you should have run [`odo create`](/docs/command-reference/create) before you deploy services.
The deployment of a service is done in two steps:
1. Define the service and store its definition in the devfile,
2. Deploy the defined service to the cluster, using `odo push`.
## Creating a new service
You can create a new service with the command:
```
odo service create
```
For example, to create an instance of a Redis service named `my-redis-service`, you can run:
```
$ odo catalog list services
Services available through Operators
NAME CRDs
redis-operator.v0.8.0 RedisCluster, Redis
$ odo service create redis-operator.v0.8.0/Redis my-redis-service
Successfully added service to the configuration; do 'odo push' to create service on the cluster
```
This command creates a Kubernetes manifest in the `kubernetes/` directory, containing the definition of the service, and this file is referenced from the `devfile.yaml` file.
```
$ cat kubernetes/odo-service-my-redis-service.yaml
apiVersion: redis.redis.opstreelabs.in/v1beta1
kind: Redis
metadata:
name: my-redis-service
spec:
kubernetesConfig:
image: quay.io/opstree/redis:v6.2.5
imagePullPolicy: IfNotPresent
resources:
limits:
cpu: 101m
memory: 128Mi
requests:
cpu: 101m
memory: 128Mi
serviceType: ClusterIP
redisExporter:
enabled: false
image: quay.io/opstree/redis-exporter:1.0
storage:
volumeClaimTemplate:
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
```
```
$ cat devfile.yaml
[...]
components:
- kubernetes:
uri: kubernetes/odo-service-my-redis-service.yaml
name: my-redis-service
[...]
```
Note that the name of the instance is optional. If you do not provide a name, the instance will be named after the service, in lowercase. For example, the following command will create an instance of a Redis service named `redis`:
```
$ odo service create redis-operator.v0.8.0/Redis
```
### Inlining the manifest
By default, a new manifest is created in the `kubernetes/` directory, referenced from the `devfile.yaml` file. It is possible to inline the manifest inside the `devfile.yaml` file using the `--inlined` flag:
```
$ odo service create redis-operator.v0.8.0/Redis my-redis-service --inlined
Successfully added service to the configuration; do 'odo push' to create service on the cluster
$ cat devfile.yaml
[...]
components:
- kubernetes:
inlined: |
apiVersion: redis.redis.opstreelabs.in/v1beta1
kind: Redis
metadata:
name: my-redis-service
spec:
kubernetesConfig:
image: quay.io/opstree/redis:v6.2.5
imagePullPolicy: IfNotPresent
resources:
limits:
cpu: 101m
memory: 128Mi
requests:
cpu: 101m
memory: 128Mi
serviceType: ClusterIP
redisExporter:
enabled: false
image: quay.io/opstree/redis-exporter:1.0
storage:
volumeClaimTemplate:
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
name: my-redis-service
[...]
```
### Configuring the service
Unless otherwise specified, the service will be created with a default configuration. You can use either command-line arguments or a file to specify your own configuration.
#### Using command-line arguments
You can use the `--parameters` (or `-p`) flag to specify your own configuration.
In the following example, we will configure the Redis service with three parameters:
```
$ odo service create redis-operator.v0.8.0/Redis my-redis-service \
-p kubernetesConfig.image=quay.io/opstree/redis:v6.2.5 \
-p kubernetesConfig.serviceType=ClusterIP \
-p redisExporter.image=quay.io/opstree/redis-exporter:1.0
Successfully added service to the configuration; do 'odo push' to create service on the cluster
$ cat kubernetes/odo-service-my-redis-service.yaml
apiVersion: redis.redis.opstreelabs.in/v1beta1
kind: Redis
metadata:
name: my-redis-service
spec:
kubernetesConfig:
image: quay.io/opstree/redis:v6.2.5
serviceType: ClusterIP
redisExporter:
image: quay.io/opstree/redis-exporter:1.0
```
You can obtain the possible parameters for a specific service from the [`odo catalog describe service` command](/docs/command-reference/catalog/#getting-information-about-a-service).
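For instance, assuming the same Operator and kind as in the examples above, a command along these lines could be used (the exact parameter list depends on the installed Operator version):
```
odo catalog describe service redis-operator.v0.8.0/Redis
```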
#### Using a file
You can use a YAML manifest to provide your own configuration.
In the following example, we will configure the Redis service with three parameters. For this, first create a manifest:
```
$ cat > my-redis.yaml <<EOF
apiVersion: redis.redis.opstreelabs.in/v1beta1
kind: Redis
metadata:
name: my-redis-service
spec:
kubernetesConfig:
image: quay.io/opstree/redis:v6.2.5
serviceType: ClusterIP
redisExporter:
image: quay.io/opstree/redis-exporter:1.0
EOF
```
Then create the service from the manifest:
```
$ odo service create --from-file my-redis.yaml
Successfully added service to the configuration; do 'odo push' to create service on the cluster
```
## Deleting a service
You can delete a service with the command:
```
odo service delete
```
For example:
```
$ odo service list
NAME MANAGED BY ODO STATE AGE
Redis/my-redis-service Yes (api) Deleted locally 5m39s
$ odo service delete Redis/my-redis-service
? Are you sure you want to delete Redis/my-redis-service Yes
Service "Redis/my-redis-service" has been successfully deleted; do 'odo push' to delete service from the cluster
```
You can use the `--force` (or `-f`) flag to force the deletion of the service without confirmation.
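For example, deleting the same service without being prompted:
```
odo service delete Redis/my-redis-service -f
```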
## Listing services
You can get the list of services created for your component with the command:
```
odo service list
```
For example:
```
$ odo service list
NAME MANAGED BY ODO STATE AGE
Redis/my-redis-service-1 Yes (api) Not pushed
Redis/my-redis-service-2 Yes (api) Pushed 52s
Redis/my-redis-service-3 Yes (api) Deleted locally 1m22s
```
For each service, `STATE` indicates if the service has been pushed to the cluster using `odo push`, or if the service is still running on the cluster but removed from the devfile locally using `odo service delete`.
## Getting information about a service
You can get the details about a service such as its kind, version, name and list of configured parameters with the command:
```
odo service describe
```
For example:
```
$ odo service describe Redis/my-redis-service
Version: redis.redis.opstreelabs.in/v1beta1
Kind: Redis
Name: my-redis-service
Parameters:
NAME VALUE
kubernetesConfig.image quay.io/opstree/redis:v6.2.5
kubernetesConfig.serviceType ClusterIP
redisExporter.image quay.io/opstree/redis-exporter:1.0
```

View File

@@ -0,0 +1,101 @@
---
title: odo storage
sidebar_position: 10
---
odo lets users manage storage volumes attached to the components. A storage volume can be either an ephemeral volume using an `emptyDir` Kubernetes volume, or a [PVC](https://kubernetes.io/docs/concepts/storage/volumes/#persistentvolumeclaim), which is a way for users to "claim" a persistent volume (such as a GCE PersistentDisk or an iSCSI volume) without understanding the details of the particular cloud environment. The persistent storage volume can be used to persist data across restarts and rebuilds of the component.
### Adding a storage volume
We can add a storage volume to the cluster using `odo storage create`.
```shell
odo storage create
```
For example:
```shell
$ odo storage create store --path /data --size 1Gi
✓ Added storage store to nodejs-project-ufyy
$ odo storage create tempdir --path /tmp --size 2Gi --ephemeral
✓ Added storage tempdir to nodejs-project-ufyy
Please use `odo push` command to make the storage accessible to the component
```
In the above example, the first storage volume has been mounted to the `/data` path and has a size of `1Gi`,
and the second volume has been mounted to `/tmp` and is ephemeral.
### Listing the storage volumes
We can check the storage volumes currently used by the component using `odo storage list`.
```shell
odo storage list
```
For example:
```shell
$ odo storage list
The component 'nodejs-project-ufyy' has the following storage attached:
NAME SIZE PATH STATE
store 1Gi /data Not Pushed
tempdir 2Gi /tmp Not Pushed
```
### Deleting a storage volume
We can delete a storage volume using `odo storage delete`.
```shell
odo storage delete
```
For example:
```shell
$ odo storage delete store -f
Deleted storage store from nodejs-project-ufyy
Please use `odo push` command to delete the storage from the cluster
```
In the above example, the `-f` flag deletes the storage without asking for confirmation.
### Adding storage to specific container
If your devfile has multiple containers, you can specify which container you want the storage to attach to, using the `--container` flag with the `odo storage create` command.
The following is an excerpt from an example devfile with multiple containers:
```yaml
components:
- name: runtime
container:
image: registry.access.redhat.com/ubi8/nodejs-12:1-36
memoryLimit: 1024Mi
endpoints:
- name: "3000-tcp"
targetPort: 3000
mountSources: true
- name: funtime
container:
image: registry.access.redhat.com/ubi8/nodejs-12:1-36
memoryLimit: 1024Mi
```
Here, we have two containers: `runtime` and `funtime`. To attach storage only to the `funtime` container, we can run:
```shell
odo storage create --container
```
```shell
$ odo storage create store --path /data --size 1Gi --container funtime
✓ Added storage store to nodejs-testing-xnfg
Please use `odo push` command to make the storage accessible to the component
```
You can verify this using the `odo storage list` command:
```shell
$ odo storage list
The component 'nodejs-testing-xnfg' has the following storage attached:
NAME SIZE PATH CONTAINER STATE
store 1Gi /data funtime Not Pushed
```

View File

@@ -0,0 +1,11 @@
---
title: Contributing to odo
sidebar_position: 50
---
* [Contributing to code](https://github.com/redhat-developer/odo/wiki/Developer-Guidelines)
* [Writing and running tests](https://github.com/redhat-developer/odo/wiki/Writing-and-running-tests)
* [Contributing to docs](https://github.com/redhat-developer/odo/wiki/Contributing-to-Docs)
* [Release Guideline](https://github.com/redhat-developer/odo/wiki/Release-Guideline)
* [Reviewing PR](https://github.com/redhat-developer/odo/wiki/PR-Review)
* [Logging in odo](https://github.com/redhat-developer/odo/wiki/Logging-in-odo)
* [Getting involved in the community](https://github.com/redhat-developer/odo/wiki/Getting-involved-in-the-Community)

View File

@@ -0,0 +1,4 @@
{
"label": "Getting Started",
"position": 2
}

View File

@@ -0,0 +1,38 @@
---
title: Basics
sidebar_position: 2
---
# Concepts of odo
`odo` abstracts Kubernetes concepts into developer-friendly terminology; in this document, we will take a look at the following terms:
### Application
An application in `odo` is a classic application developed with a [cloud-native approach](https://www.redhat.com/en/topics/cloud-native-apps) that is used to perform a particular task.
Examples of applications: Online Video Streaming, Hotel Reservation System, Online Shopping.
### Component
In the cloud-native architecture, an application is a collection of small, independent, and loosely coupled components; an `odo` component is one of these components.
Examples of components: API Backend, Web Frontend, Payment Backend.
### Project
A project helps achieve multi-tenancy: several applications can be run in the same cluster by different teams in different projects.
### Context
Context is the directory on the system that contains the source code, tests, libraries and `odo` specific config files for a single component.
### URL
A URL exposes a component to be accessed from outside the cluster.
### Storage
Storage is the persistent storage in the cluster: it persists the data across restarts and any rebuilds of a component.
### Service
A service is an external application that a component can connect to or depend on to gain additional functionality.
Example of services: PostgreSQL, MySQL, Redis, RabbitMQ.
### Devfile
Devfile is a portable YAML file containing the definition of a component and its related URLs, storages and services. Visit [devfile.io](https://devfile.io/) for more information on devfiles.

View File

@@ -0,0 +1,4 @@
{
"label": "Cluster Setup",
"position": 4
}

View File

@@ -0,0 +1,209 @@
---
title: Kubernetes
sidebar_position: 1
---
# Setting up a Kubernetes cluster
## Introduction
This guide is helpful in setting up a development environment intended to be used with `odo`; this setup is not recommended for a production environment.
`odo` can be used with ANY Kubernetes cluster. However, this development environment will ensure complete coverage of all features of `odo`.
## Prerequisites
* You have a Kubernetes cluster set up (such as [minikube](https://minikube.sigs.k8s.io/docs/start/))
* You have admin privileges to the cluster
**Important notes:** `odo` will use the __default__ ingress and storage provisioning on your cluster. If they have not been set correctly, see our [troubleshooting guide](/docs/getting-started/cluster-setup/kubernetes#troubleshooting) for more details.
## Summary
* An Ingress controller in order to use `odo url create`
* Operator Lifecycle Manager in order to use `odo service create`
* (Optional) Service Binding Operator in order to use `odo link`
## Installing an Ingress controller
Creating an Ingress controller is required to use the `odo url create` feature.
This can be enabled by installing [an Ingress addon as per the Kubernetes documentation](https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/), such as the built-in addon on [minikube](https://minikube.sigs.k8s.io/) or [NGINX Ingress](https://kubernetes.github.io/ingress-nginx/).
**IMPORTANT:** `odo` cannot specify an Ingress controller and will use the *default* Ingress controller.
If you are unable to access your components, check that your [default Ingress controller](https://kubernetes.github.io/ingress-nginx/#i-have-only-one-ingress-controller-in-my-cluster-what-should-i-do) has been set correctly.
### Minikube
To install an Ingress controller on a minikube cluster, enable the **ingress** addon with the following command:
```shell
minikube addons enable ingress
```
### NGINX Ingress
To enable the Ingress feature on a Kubernetes cluster _other than minikube_, we recommend using the [NGINX Ingress controller](https://kubernetes.github.io/ingress-nginx/deploy/).
With the default installation method, you will need to set NGINX Ingress as your [default Ingress controller](https://kubernetes.github.io/ingress-nginx/#i-have-only-one-ingress-controller-in-my-cluster-what-should-i-do) so that `odo` can create URLs correctly.
### Other Ingress controllers
For a list of all available Ingress controllers see the [the Ingress controller documentation](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/).
To learn more about enabling this feature on your cluster, see the [Ingress prerequisites](https://kubernetes.io/docs/concepts/services-networking/ingress/#prerequisites) in the official Kubernetes documentation.
## Installing the Operator Lifecycle Manager (OLM)
Installing the Operator Lifecycle Manager (OLM) is required to use the `odo service create` feature.
The [Operator Lifecycle Manager (OLM)](https://olm.operatorframework.io/) is an open source toolkit to manage Kubernetes native applications, called Operators, in a streamlined and scalable way.
`odo` utilizes Operators in order to create and link services to applications.
The following command will install OLM cluster-wide as well as create two new namespaces: `olm` and `operators`.
```shell
curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.20.0/install.sh | bash -s v0.20.0
```
Running the script will take some time to install all the necessary resources in the Kubernetes cluster including the `OperatorGroup` resource.
Note: Check the OLM [release page](https://github.com/operator-framework/operator-lifecycle-manager/releases/) for the latest release.
### Installing an Operator
Installing an Operator allows you to install a service such as Postgres, Redis or DataDog.
To install an operator from the OperatorHub website:
1. Visit the [OperatorHub](https://operatorhub.io) website.
2. Search for an Operator of your choice.
3. Navigate to its detail page.
4. Click on **Install**.
5. Follow the instructions in the installation popup. Make sure to install the Operator in your desired namespace or cluster-wide, depending on your choice and the Operator's capabilities.
6. [Verify the Operator installation](#verifying-the-operator-installation).
### Verifying the Operator installation
Once the Operator is successfully installed on the cluster, you can use `odo` to verify the Operator installation and see the CRDs associated with it; run the following command:
```shell
odo catalog list services
```
The output may look similar to:
```shell
odo catalog list services
Services available through Operators
NAME CRDs
datadog-operator.v0.6.0 DatadogAgent, DatadogMetric, DatadogMonitor
service-binding-operator.v0.9.1 ServiceBinding, ServiceBinding
```
If you do not see your installed Operator in the list, follow the [troubleshooting guide](#troubleshoot-the-operator-installation) to find the issue and debug it.
### Troubleshooting the Operator installation
There are two ways to confirm that the Operator has been installed properly.
The examples you may see in this guide use [Datadog Operator](https://operatorhub.io/operator/datadog-operator) and [Service Binding Operator](https://operatorhub.io/operator/service-binding-operator).
1. Verify that its pod started and is in “Running” state.
```shell
kubectl get pods -n operators
```
The output may look similar to:
```shell
kubectl get pods -n operators
NAME READY STATUS RESTARTS AGE
datadog-operator-manager-5db67c7f4-hgb59 1/1 Running 0 2m13s
service-binding-operator-c8d7587b8-lxztx 1/1 Running 5 6d23h
```
2. Verify that the ClusterServiceVersion (csv) resource is in Succeeded or Installing phase.
```shell
kubectl get csv -n operators
```
The output may look similar to:
```shell
kubectl get csv -n operators
NAME DISPLAY VERSION REPLACES PHASE
datadog-operator.v0.6.0 Datadog Operator 0.6.0 datadog-operator.v0.5.0 Succeeded
service-binding-operator.v0.9.1 Service Binding Operator 0.9.1 service-binding-operator.v0.9.0 Succeeded
```
If the value in the PHASE column is anything other than _Installing_ or _Succeeded_, take a look at the pods in the `olm` namespace and ensure that the pod whose name starts with `operatorhubio-catalog` is in the Running state:
```shell
kubectl get pods -n olm
NAME READY STATUS RESTARTS AGE
operatorhubio-catalog-x24dq 0/1 CrashLoopBackOff 6 9m40s
```
If, as in the output above, the pod is in CrashLoopBackOff or any state other than Running, delete the pod:
```shell
kubectl delete pods/<operatorhubio-catalog-name> -n olm
```
### Checking to see if an Operator has been installed
For this example, we will check the [PostgreSQL Operator](https://operatorhub.io/operator/postgresql) installation.
Check `kubectl get csv` to see if your Operator exists:
```shell
$ kubectl get csv
NAME DISPLAY VERSION REPLACES PHASE
postgresoperator.v5.0.3 Crunchy Postgres for Kubernetes 5.0.3 postgresoperator.v5.0.2 Succeeded
```
If the `PHASE` is anything other than `Succeeded`, you won't see the Operator in the `odo catalog list services` output, and you won't be able to create a working Operator backed service out of it either. You will have to wait until `PHASE` says `Succeeded`.
## (Optional) Installing the Service Binding Operator
`odo` uses [Service Binding Operator](https://operatorhub.io/operator/service-binding-operator) to provide the `odo link` feature which helps to connect an odo component to a service or another component.
The Service Binding Operator is _optional_ and is used to provide extra metadata support for `odo` deployments.
Operators can be installed in a specific namespace or cluster-wide.
```shell
kubectl create -f https://operatorhub.io/install/service-binding-operator.yaml
```
Running the command will create the necessary resource in the `operators` namespace.
If you want to access this resource from other namespaces as well, add your target namespace to the `.spec.targetNamespaces` list in the `service-binding-operator.yaml` file before running `kubectl create`.
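A minimal sketch of that workflow, assuming an additional namespace named `my-namespace` (a hypothetical name):
```shell
# Download the manifest instead of applying it directly
curl -sL https://operatorhub.io/install/service-binding-operator.yaml -o service-binding-operator.yaml
# Edit the file and add your namespace under .spec.targetNamespaces, for example:
#   spec:
#     targetNamespaces:
#     - operators
#     - my-namespace
kubectl create -f service-binding-operator.yaml
```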
See [Verifying the Operator installation](#verifying-the-operator-installation) to ensure that the Operator was installed successfully.
## Troubleshooting
### Confirming your Ingress Controller functionality
`odo` will use the *default* Ingress Controller. By default, when you install an Ingress Controller such as [NGINX Ingress](https://kubernetes.github.io/ingress-nginx/), it will *not* be set as the default.
You must set it as the default Ingress Controller by adding an annotation to your IngressClass:
```sh
kubectl get IngressClass -A
kubectl edit IngressClass/YOUR-INGRESS -n YOUR-NAMESPACE
```
And add the following annotation:
```yaml
annotations:
ingressclass.kubernetes.io/is-default-class: "true"
```
### Confirming your Storage Provisioning functionality
`odo` deploys with [Persistent Volume Claims](https://kubernetes.io/docs/concepts/storage/persistent-volumes/). By default, when you install a [StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/) such as [GlusterFS](https://kubernetes.io/docs/concepts/storage/storage-classes/#glusterfs), it will *not* be set as the default.
You must set it as the default storage provisioner by adding an annotation to your StorageClass:
```sh
kubectl get StorageClass -A
kubectl edit StorageClass/YOUR-STORAGE-CLASS -n YOUR-NAMESPACE
```
And add the following annotation:
```yaml
annotations:
storageclass.kubernetes.io/is-default-class: "true"
```

View File

@@ -0,0 +1,70 @@
---
title: OpenShift
sidebar_position: 2
---
# Setting up an OpenShift cluster
## Introduction
This guide is helpful in setting up a development environment intended to be used with `odo`; this setup is not recommended for a production environment.
## Prerequisites
* You have an OpenShift cluster set up (such as [crc](https://crc.dev/crc/#installing-codeready-containers_gsg))
* You have admin privileges to the cluster
## Summary
* An Operator in order to use `odo service`
* (Optional) Service Binding Operator in order to use `odo link`
## Installing an Operator
Installing an Operator allows you to install a service such as PostgreSQL, Redis or DataDog.
To install an Operator from the OpenShift web console:
1. Log in to the OpenShift web console as an admin user, and navigate to Operators > OperatorHub.
2. Make sure that the Project is set to All Projects.
3. Search for an Operator of your choice in the search box under **All Items**.
4. Click on the Operator; this should open a side pane.
5. Click on the **Install** button on the side pane; this should open an **Install Operator** page.
6. Set the **Installation mode**, **Installed Namespace** and **Approval Strategy** as per your requirement.
7. Click on the **Install** button.
8. Wait until the Operator is installed.
9. Once the Operator is installed, you should see _**Installed operator - ready for use**_, and a **View Operator** button appears on the page.
10. Click on the **View Operator** button; this should take you to Operators > Installed Operators > Operator details page, and you should be able to see details of your Operator.
### Verifying the Operator installation
Once the Operator is successfully installed on the cluster, you can use `odo` to verify the Operator installation and see the CRDs associated with it; run the following command:
```shell
odo catalog list services
```
The output may look similar to:
```shell
odo catalog list services
Services available through Operators
NAME CRDs
datadog-operator.v0.6.0 DatadogAgent, DatadogMetric, DatadogMonitor
service-binding-operator.v0.9.1 ServiceBinding, ServiceBinding
```
## (Optional) Installing the Service Binding Operator
`odo` uses [Service Binding Operator](https://operatorhub.io/operator/service-binding-operator) to provide the `odo link` feature which helps to connect an odo component to a service or another component.
The Service Binding Operator is _optional_ and is used to provide extra metadata support for `odo` deployments.
To install the Service Binding Operator from the OpenShift web console:
1. Log in to the OpenShift web console as an admin user, and navigate to Operators > OperatorHub.
2. Make sure that the Project is set to All Projects.
3. Search for _**Service Binding Operator**_ in the search box under **All Items**.
4. Click on the **Service Binding Operator**; this should open a side pane.
5. Click on the **Install** button on the side pane; this should open an **Install Operator** page.
6. Make sure the **Installation mode** is set to "_All namespaces on the cluster (default)_"; **Installed Namespace** is set to "_openshift-operators_"; and **Approval Strategy** is "_Automatic_".
7. Click on the **Install** button.
8. Wait until the Operator is installed.
9. Once the Operator is installed, you should see **_Installed operator - ready for use_**, and a **View Operator** button appears on the page.
10. Click on the **View Operator** button; this should take you to Operators > Installed Operators > Operator details page, and you should be able to see details of your Operator.

View File

@@ -0,0 +1,104 @@
---
title: Configuration
sidebar_position: 6
---
# Configuring odo global settings
The global settings for odo can be found in the `preference.yaml` file, which is located by default in the `.odo` directory of the user's HOME directory.
Example:
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
<Tabs
defaultValue="linux"
values={[
{label: 'Linux', value: 'linux'},
{label: 'Windows', value: 'windows'},
{label: 'Mac', value: 'mac'},
]}>
<TabItem value="linux">
```sh
/home/userName/.odo/preference.yaml
```
</TabItem>
<TabItem value="windows">
```sh
C:\\Users\userName\.odo\preference.yaml
```
</TabItem>
<TabItem value="mac">
```sh
/Users/userName/.odo/preference.yaml
```
</TabItem>
</Tabs>
---
A different location can be set for the `preference.yaml` by exporting `GLOBALODOCONFIG` in the user environment.
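For example, to keep the preferences under a custom directory (the path below is only an illustration):
```shell
export GLOBALODOCONFIG=$HOME/odo-config/preference.yaml
odo preference view
```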
### View the configuration
To view the current configuration, run `odo preference view`.
```shell
odo preference view
```
Example:
```shell
$ odo preference view
PARAMETER CURRENT_VALUE
UpdateNotification
Timeout
PushTimeout
RegistryCacheTime
Ephemeral
ConsentTelemetry
```
### Set a configuration
To set a value for a preference key, run `odo preference set <key> <value>`.
```shell
odo preference set updatenotification false
```
Example:
```shell
$ odo preference set updatenotification false
Global preference was successfully updated
```
Note that the preference key is case-insensitive.
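For instance, the following two invocations are equivalent:
```shell
odo preference set updatenotification false
odo preference set UpdateNotification false
```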
### Unset a configuration
To unset the value of a preference key, run `odo preference unset <key>`; use the `-f` flag to skip the confirmation.
```shell
odo preference unset updatenotification
```
Example:
```shell
$ odo preference unset updatenotification
? Do you want to unset updatenotification in the preference (y/N) y
Global preference was successfully updated
```
Unsetting a preference key sets it to an empty value in the preference file. odo will then use the [default value](./configure#preference-key-table) for that key.
### Preference Key Table
| Preference | Description | Default |
|--------------------|--------------------------------------------------------------------------------|------------------------|
| UpdateNotification | Control whether a notification to update odo is shown | True |
| NamePrefix | Set a default name prefix for an odo resource (component, storage, etc) | Current directory name |
| Timeout | Timeout for Kubernetes server connection check | 1 second |
| PushTimeout | Timeout for waiting for a component to start | 240 seconds |
| RegistryCacheTime | For how long (in minutes) odo will cache information from the Devfile registry | 4 Minutes |
| Ephemeral | Control whether odo should create an emptyDir volume to store source code | True |
| ConsentTelemetry | Control whether odo can collect telemetry for the user's odo usage | False |

View File

@@ -0,0 +1,32 @@
---
title: Features
sidebar_position: 1
---
# Features of odo
By using `odo`, application developers can develop, test, debug, and deploy microservices based applications on Kubernetes without having a deep understanding of the platform.
`odo` follows a *create and push* workflow. As a user, when you *create*, the information (or manifest) is stored in a configuration file. When you *push*, the resources are created on the Kubernetes cluster. All of this is stored in the Kubernetes API for seamless accessibility and function.
`odo` uses *deploy and link* commands to link components and services together. `odo` achieves this by creating and deploying services based on [Kubernetes Operators](https://github.com/operator-framework/) in the cluster. Services can be created using any of the operators available on [OperatorHub.io](https://operatorhub.io). Upon linking a service, `odo` injects the service configuration into your component. Your application can then use this configuration to communicate with the Operator backed service.
### What can `odo` do?
Below is a summary of what `odo` can do with your Kubernetes cluster:
* Create a new manifest or use an existing one to deploy applications on a Kubernetes cluster
* Provide commands to create and update the manifest without diving into Kubernetes configuration files
* Securely expose the application running on the Kubernetes cluster so it can be accessed from the developer's machine
* Add and remove additional storage for the application on the Kubernetes cluster
* Create [Operator](https://github.com/operator-framework/) backed services and link with them
* Create a link between multiple microservices deployed as `odo` components
* Debug remote applications deployed using `odo` from the IDE
* Run tests on the applications deployed on Kubernetes
Take a look at the "Using odo" documentation for in-depth guides on using advanced `odo` commands.
### What features to expect in odo?
For a quick high level summary of the features we are planning to add, take a look at odo's [milestones on GitHub](https://github.com/redhat-developer/odo/milestones).

View File

@@ -0,0 +1,289 @@
---
title: Installation
sidebar_position: 3
---
`odo` can be used as either a [CLI tool](/docs/getting-started/installation#cli-binary-installation) or an [IDE plugin](/docs/getting-started/installation#ide-installation) on Mac, Windows or Linux.
## CLI installation
Each release is *signed*, *checksummed*, *verified*, and then pushed to our [binary mirror](https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/).
For more information on the changes of each release, they can be viewed either on [GitHub](https://github.com/redhat-developer/odo/releases) or the [blog](/blog).
### Linux
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
<Tabs
defaultValue="amd64"
values={[
{label: 'Intel / AMD 64', value: 'amd64'},
{label: 'ARM 64', value: 'arm64'},
{label: 'PowerPC', value: 'ppc64le'},
{label: 'IBM Z', value: 's390x'},
]}>
<TabItem value="amd64">
Installing `odo` on `amd64` architecture:
1. Download the latest release from the mirror:
```shell
curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-linux-amd64 -o odo
```
2. (Optional) Verify the downloaded binary with the SHA-256 sum:
```shell
curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-linux-amd64.sha256 -o odo.sha256
echo "$(<odo.sha256) odo" | shasum -a 256 --check
```
3. Install odo
```shell
sudo install -o root -g root -m 0755 odo /usr/local/bin/odo
```
4. (Optional) If you do not have root access, you can install `odo` to the local directory and add it to your `$PATH`:
```shell
mkdir -p $HOME/bin
cp ./odo $HOME/bin/odo
export PATH=$PATH:$HOME/bin
# (Optional) Add the $HOME/bin to your shell initialization file
echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc
```
</TabItem>
<TabItem value="arm64">
Installing `odo` on `arm64` architecture:
1. Download the latest release from the mirror:
```shell
curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-linux-arm64 -o odo
```
2. (Optional) Verify the downloaded binary with the SHA-256 sum:
```shell
curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-linux-arm64.sha256 -o odo.sha256
echo "$(<odo.sha256) odo" | shasum -a 256 --check
```
3. Install odo
```shell
sudo install -o root -g root -m 0755 odo /usr/local/bin/odo
```
4. (Optional) If you do not have root access, you can install `odo` to the local directory and add it to your `$PATH`:
```shell
mkdir -p $HOME/bin
cp ./odo $HOME/bin/odo
export PATH=$PATH:$HOME/bin
# (Optional) Add the $HOME/bin to your shell initialization file
echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc
```
</TabItem>
<TabItem value="ppc64le">
Installing `odo` on `ppc64le` architecture:
1. Download the latest release from the mirror:
```shell
curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-linux-ppc64le -o odo
```
2. (Optional) Verify the downloaded binary with the SHA-256 sum:
```shell
curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-linux-ppc64le.sha256 -o odo.sha256
echo "$(<odo.sha256) odo" | shasum -a 256 --check
```
3. Install odo
```shell
sudo install -o root -g root -m 0755 odo /usr/local/bin/odo
```
4. (Optional) If you do not have root access, you can install `odo` to the local directory and add it to your `$PATH`:
```shell
mkdir -p $HOME/bin
cp ./odo $HOME/bin/odo
export PATH=$PATH:$HOME/bin
# (Optional) Add the $HOME/bin to your shell initialization file
echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc
```
</TabItem>
<TabItem value="s390x">
Installing `odo` on `s390x` architecture:
1. Download the latest release from the mirror:
```shell
curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-linux-s390x -o odo
```
2. (Optional) Verify the downloaded binary with the SHA-256 sum:
```shell
curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-linux-s390x.sha256 -o odo.sha256
echo "$(<odo.sha256) odo" | shasum -a 256 --check
```
3. Install odo
```shell
sudo install -o root -g root -m 0755 odo /usr/local/bin/odo
```
4. (Optional) If you do not have root access, you can install `odo` to the local directory and add it to your `$PATH`:
```shell
mkdir -p $HOME/bin
cp ./odo $HOME/bin/odo
export PATH=$PATH:$HOME/bin
# (Optional) Add the $HOME/bin to your shell initialization file
echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc
```
</TabItem>
</Tabs>
---
### MacOS
<Tabs
defaultValue="intel"
values={[
{label: 'Intel', value: 'intel'},
{label: 'Apple Silicon', value: 'arm'},
]}>
<TabItem value="intel">
Installing `odo` on `amd64` architecture:
1. Download the latest release from the mirror:
```shell
curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-darwin-amd64 -o odo
```
2. (Optional) Verify the downloaded binary with the SHA-256 sum:
```shell
curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-darwin-amd64.sha256 -o odo.sha256
echo "$(<odo.sha256) odo" | shasum -a 256 --check
```
3. Install odo
```shell
chmod +x ./odo
sudo mv ./odo /usr/local/bin/odo
```
4. (Optional) If you do not have root access, you can install `odo` to the local directory and add it to your `$PATH`:
```shell
mkdir -p $HOME/bin
cp ./odo $HOME/bin/odo
export PATH=$PATH:$HOME/bin
# (Optional) Add the $HOME/bin to your shell initialization file
echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc
```
</TabItem>
<TabItem value="arm">
Installing `odo` on `arm64` architecture:
1. Download the latest release from the mirror:
```shell
curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-darwin-arm64 -o odo
```
2. (Optional) Verify the downloaded binary with the SHA-256 sum:
```shell
curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-darwin-arm64.sha256 -o odo.sha256
echo "$(<odo.sha256) odo" | shasum -a 256 --check
```
3. Install odo
```shell
chmod +x ./odo
sudo mv ./odo /usr/local/bin/odo
```
4. (Optional) If you do not have root access, you can install `odo` to the local directory and add it to your `$PATH`:
```shell
mkdir -p $HOME/bin
cp ./odo $HOME/bin/odo
export PATH=$PATH:$HOME/bin
# (Optional) Add the $HOME/bin to your shell initialization file
echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc
```
</TabItem>
</Tabs>
---
### Windows
1. Open a PowerShell terminal
2. Download the latest release from the mirror:
```shell
curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-windows-amd64.exe -o odo.exe
```
3. (Optional) Verify the downloaded binary with the SHA-256 sum:
```shell
curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-windows-amd64.exe.sha256 -o odo.exe.sha256
# Visually compare the output of both files
Get-FileHash odo.exe
type odo.exe.sha256
```
4. Add the binary to your `PATH`
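One possible way to do this for the current PowerShell session (the directory name is only an example):
```shell
mkdir C:\odo
Move-Item .\odo.exe C:\odo\odo.exe
$env:Path += ";C:\odo"
```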
### Installing from source code
1. Clone the repository and cd into it.
```shell
git clone https://github.com/redhat-developer/odo.git
cd odo
```
2. Install tools used by the build and test system.
```shell
make goget-tools
```
3. Build the executable from the sources in `cmd/odo`.
```shell
make bin
```
4. Check the build version to verify that it was built properly.
```shell
./odo version
```
5. Install the executable in the system's GOPATH.
```shell
make install
```
6. Check the binary version to verify that it was installed properly; verify that it is same as the build version.
```shell
odo version
```
## IDE Installation
### Installing `odo` in Visual Studio Code (VSCode)
The [OpenShift VSCode extension](https://marketplace.visualstudio.com/items?itemName=redhat.vscode-openshift-connector) uses both the `odo` and `oc` binaries to interact with a Kubernetes or OpenShift cluster.
1. Open VS Code.
2. Launch VS Code Quick Open (Ctrl+P)
3. Paste the following command:
```shell
ext install redhat.vscode-openshift-connector
```

View File

@@ -0,0 +1,226 @@
---
title: Quickstart Guide
sidebar_position: 5
---
# Quickstart Guide
In this guide, we will be using odo to create a to-do list application, with the following:
* ReactJS for the frontend
* Java Spring Boot for the backend
* PostgreSQL to store all persistent data
At the end of the guide, you will be able to list, add and delete to-do items from the web browser.
## Prerequisites
* Have the odo binary [installed](./installation.md).
* A [Kubernetes cluster](/docs/getting-started/cluster-setup/kubernetes) set up with an [ingress controller](/docs/getting-started/cluster-setup/kubernetes#installing-an-ingress-controller), the [operator lifecycle manager](/docs/getting-started/cluster-setup/kubernetes#installing-the-operator-lifecycle-manager-olm) and (optionally) the [service binding operator](/docs/getting-started/cluster-setup/kubernetes#installing-the-service-binding-operator).
* Or an [OpenShift cluster](/docs/getting-started/cluster-setup/openshift) set up with the (optional) [service binding operator](/docs/getting-started/cluster-setup/openshift#installing-the-service-binding-operator)
## Clone the quickstart guide
Clone the [quickstart](https://github.com/odo-devfiles/odo-quickstart) repo from GitHub:
```shell
git clone https://github.com/odo-devfiles/odo-quickstart
cd odo-quickstart
```
## Create a project
We will create a project named `quickstart` on the cluster to keep all quickstart-related activities separate from the rest of the cluster:
```shell
odo project create quickstart
```
## Create the frontend Node.JS component
Our frontend component is a React application that communicates with the backend component.
We will use the catalog command to list all available components and find `nodejs`:
```shell
odo catalog list components
```
Example output of `odo catalog list components`:
```shell
Odo Devfile Components:
NAME DESCRIPTION REGISTRY
nodejs Stack with Node.js 14 DefaultDevfileRegistry
nodejs-angular Stack with Angular 12 DefaultDevfileRegistry
nodejs-nextjs Stack with Next.js 11 DefaultDevfileRegistry
nodejs-nuxtjs Stack with Nuxt.js 2 DefaultDevfileRegistry
...
```
Pick `nodejs` to create the frontend component:
```shell
cd frontend
odo create nodejs frontend
```
Create a URL in order to access the component in the browser:
```shell
odo url create --port 3000 --host <CLUSTER-HOSTNAME>
```
**Minikube users:** Use `minikube ip` to find the cluster IP and then use `<MINIKUBE-HOSTNAME>.nip.io` for `--host`.
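For example, if `minikube ip` prints `192.168.49.2` (a typical but hypothetical value), the URL could be created with:
```shell
odo url create --port 3000 --host 192.168.49.2.nip.io
```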
Push the component to the cluster:
```shell
odo push
```
The URL will be listed in the `odo push` output, or can be found in `odo url list`.
Browse the site and try it out! Note that you will not be able to add, remove or list the to-dos yet, as we have not linked the frontend and backend components.
## Create the backend Java component
The backend application is a Java Spring Boot based REST API which will list, insert and delete to-dos from the database.
Find `java-springboot` in the catalog:
```shell
odo catalog list components
```
Example output of `odo catalog list components`:
```shell
Odo Devfile Components:
NAME DESCRIPTION REGISTRY
java-quarkus Quarkus with Java DefaultDevfileRegistry
java-springboot Spring Boot® using Java DefaultDevfileRegistry
java-vertx Upstream Vert.x using Java DefaultDevfileRegistry
...
```
Let's create the component below:
```shell
cd ../backend
odo create java-springboot backend
odo url create --port 8080 --host <CLUSTER-HOSTNAME>
odo push
```
Note that you will not be able to access `http://<YOUR-URL>/api/v1/todos` until we link the backend component to the database service.
## Create the Postgres service
Use `odo catalog list services` to list all available operators.
By default, the [Operator Lifecycle Manager (OLM)](/docs/getting-started/cluster-setup/kubernetes#installing-the-operator-lifecycle-manager-olm) includes no Operators; they must be installed via [OperatorHub](https://operatorhub.io/).
Install the [Postgres Operator](https://operatorhub.io/operator/postgresql) on the cluster:
```shell
kubectl create -f https://operatorhub.io/install/postgresql.yaml
```
Find `postgresql` in the catalog:
```shell
odo catalog list services
```
Example output of `odo catalog list services`:
```shell
Services available through Operators
NAME CRDs
postgresoperator.v5.0.3 PostgresCluster
```
If you don't see the PostgreSQL Operator listed yet, it may still be installing. Check out our [Operator troubleshooting guide](/docs/getting-started/cluster-setup/kubernetes#checking-to-see-if-an-operator-has-been-installed) for more information.
[//]: # (This needs to fixed in the future and a parameter-based command added rather than a .yaml file)
[//]: # (Right now this is blocked on: https://github.com/redhat-developer/odo/issues/5215)
Create the service using the provided `postgrescluster.yaml` file from [CrunchyData's Postgres guide](https://access.crunchydata.com/documentation/postgres-operator/5.0.0/tutorial/create-cluster/):
```sh
odo service create --from-file ../postgrescluster.yaml
```
The service from `postgrescluster.yaml` should now be added to your `devfile.yaml`; run `odo push` to create the database on the cluster:
```shell
odo push
```
## Link the backend component and the service
Now we will link the backend component (Java API) to the service (Postgres).
First, see if the service has been deployed:
```shell
odo service list
NAME MANAGED BY ODO STATE AGE
PostgresCluster/hippo Yes (backend) Pushed 3m42s
```
Link the backend component with the above service:
```shell
odo link PostgresCluster/hippo
```
Push the changes and `odo` will link the service to the component:
```shell
odo push
```
Now your service is linked to the backend component!
## Link the frontend and backend components
For our last step, we will now link the backend Java component (which also uses the Postgres service) and the frontend Node.JS component.
This will allow both to communicate with each other in order to store persistent data.
Change to the `frontend` component directory and link it to the backend:
```shell
cd ../frontend
odo link backend
```
Push the changes:
```shell
odo push
```
We're done! Now it's time to test your new multi-component and service application.
## Testing your application
### Frontend Node.JS component
Find out what URL is being used by the frontend:
```shell
odo url list
Found the following URLs for component frontend
NAME STATE URL PORT SECURE KIND
http-3000 Pushed http://<URL-OUTPUT> 3000 false ingress
```
Visit the link and type in some to-dos!
### Backend Java component
Let's see if each to-do is being stored in the backend API and database.
Find out what URL is being used by the backend:
```shell
odo url list
Found the following URLs for component backend
NAME STATE URL PORT SECURE KIND
8080-tcp Pushed http://<URL-OUTPUT> 8080 false ingress
```
When you `curl` or view the URL on your browser, you'll now see the list of your to-dos:
```shell
curl http://<URL-OUTPUT>/api/v1/todos
[{"id":1,"description":"hello"},{"id":2,"description":"world"}]
```
## Further reading
Want to learn what else `odo` can do? Check out the [Tutorials](/docs/intro) on the sidebar.

View File

@@ -0,0 +1,51 @@
---
sidebar_position: 1
title: Introduction
---
### What is odo?
`odo` is a fast, iterative and straightforward CLI tool for developers who write, build, and deploy applications on Kubernetes.
We abstract the complex concepts of Kubernetes so you can focus on one thing: `code`.
Choose your favourite framework and `odo` will deploy it *fast* and *often* to your container orchestrator cluster.
`odo` is focused on [inner loop](./intro#what-is-inner-loop-and-outer-loop) development as well as tooling that helps users transition to the [outer loop](./intro#what-is-inner-loop-and-outer-loop).
Brendan Burns, one of the co-founders of Kubernetes, said in the [book Kubernetes Patterns](https://www.redhat.com/cms/managed-files/cm-oreilly-kubernetes-patterns-ebook-f19824-201910-en.pdf):
> It (Kubernetes) is the foundation on which applications will be built, and it provides a large library of APIs and tools for building these applications, but it does little to provide the application or container developer with any hints or guidance for how these various pieces can be combined into a complete, reliable system that satisfies their business needs and goals.
`odo` satisfies that need by making Kubernetes development *super easy* for application developers and cloud engineers.
### What is "inner loop" and "outer loop"?
The **inner loop** consists of local coding, building, running, and testing the application -- all activities that you, as a developer, can control.
The **outer loop** consists of the larger team processes that your code flows through on its way to the cluster: code reviews, integration tests, security and compliance, and so on.
The inner loop could happen mostly on your laptop. The outer loop happens on shared servers and runs in containers, and is often automated with continuous integration/continuous delivery (CI/CD) pipelines.
Usually, a code commit to source control is the transition point between the inner and outer loops.
*([Source](https://developers.redhat.com/blog/2020/06/16/enterprise-kubernetes-development-with-odo-the-cli-tool-for-developers#improving_the_developer_workflow))*
### Why should I use `odo`?
You should use `odo` if:
* You love frameworks such as Node.js, Spring Boot or dotNet
* Your application is intended to run in a Kubernetes-like infrastructure
* You don't want to spend time fighting with DevOps and learning Kubernetes in order to deploy to your enterprise infrastructure
If you are an application developer wishing to deploy to Kubernetes easily, then `odo` is for you.
### How is odo different from `kubectl` and `oc`?
Both [`kubectl`](https://github.com/kubernetes/kubectl) and [`oc`](https://github.com/openshift/oc/) require deep understanding of Kubernetes and OpenShift concepts.
`odo` is different as it focuses on application developers and cloud engineers. Both `kubectl` and `oc` are DevOps oriented tools and help in deploying applications to and maintaining a Kubernetes cluster provided you know Kubernetes well.
`odo` is not meant to:
* Maintain a production Kubernetes cluster
* Perform sysadmin tasks against a Kubernetes cluster

View File

@@ -0,0 +1,4 @@
{
"label": "Tutorials",
"position": 5
}

View File

@@ -0,0 +1,79 @@
---
title: Creating Kubernetes resources
sidebar_position: 2
---
# Creating Kubernetes resources using odo
While odo is mainly focused on application developers who would like to care less about Kubernetes and more about getting their application running on top of it, it also tries to make things simple for application architects or devfile stack authors who are comfortable with Kubernetes. One such feature, which we discuss in this guide, is the creation of Kubernetes resources such as Pods, Deployments and Services (Kubernetes Services, not the Operator backed ones) using odo. If an advanced user would like to create such Kubernetes resources, they can edit the `devfile.yaml` and add them there. An `odo push` after the edit creates the resources on the cluster. A resource created this way co-exists with the odo component.
In this guide, we will create an nginx Deployment from its Kubernetes manifest. We will reference this manifest from the `devfile.yaml`. After `odo push`, you will be able to see the Deployment and its Pods, along with the component, on the Kubernetes cluster using `kubectl` commands.
## Create an odo component
As with other resources like URL, Storage and Services, to create a Kubernetes resource, we first need to have an odo component. We will keep it simple by using a nodejs starter project. In an empty directory, create a component using:
```shell
odo create nodejs --starter nodejs-starter
```
Example:
```shell
$ odo create nodejs --starter nodejs-starter
Devfile Object Validation
✓ Checking devfile existence [52416ns]
✓ Creating a devfile component from registry: stage [95517ns]
Validation
✓ Validating if devfile name is correct [97488ns]
Starter Project
✓ Downloading starter project nodejs-starter from https://github.com/odo-devfiles/nodejs-ex.git [593ms]
Please use `odo push` command to create the component with source deployed
```
## Reference the Deployment manifest in `devfile.yaml`
odo supports referencing a URI in the `devfile.yaml` so that you don't have to copy the entire manifest into it. Assuming you have a Deployment manifest like below stored in a file called `deployment.yaml`:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: quay.io/bitnami/nginx
ports:
- containerPort: 80
```
Add the following to the `components` section in `devfile.yaml`. Note that there is already a `runtime` component in this section:
```yaml
- name: nginx-deploy
kubernetes:
uri: deployment.yaml
```
## Push to the cluster
Now run `odo push`. odo will create the component and also the nginx Deployment for you on the cluster. Note that, unlike for Operator backed services, odo won't show any message indicating that a service or some other resource was created on the cluster. This is by design, because the feature is meant for advanced users who can work with resources created in this way through the `kubectl` CLI. However, if odo fails to create a resource, it will error out and let you know.
See if the Deployment and its Pods were created on the cluster using:
```shell
kubectl get deploy
kubectl get pods
```
## Good to know
odo adds a Kubernetes label to the resources created in this way. It is `app.kubernetes.io/managed-by: odo`. odo also sets the `ownerReferences` for such objects to the underlying odo component so that when you do `odo delete`, such resources are deleted from the cluster. Other than that, odo doesn't help in managing such resources and users are expected to know how to do so.
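As a quick sanity check (a plain kubectl query, not part of odo), you can list resources carrying that label:
```shell
kubectl get all -l app.kubernetes.io/managed-by=odo
```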

View File

@@ -0,0 +1,127 @@
---
title: Debugging using devfile
sidebar_position: 6
---
### Debugging a component
Debugging your component involves port forwarding to the Kubernetes pod. Before you start, you must have a `kind: debug` step in your `devfile.yaml`.
The following `devfile.yaml` contains a `debug` step under the `commands` key:
```yaml
commands:
- id: install
exec:
component: runtime
commandLine: npm install
workingDir: /project
group:
kind: build
isDefault: true
- id: run
exec:
component: runtime
commandLine: npm start
workingDir: /project
group:
kind: run
isDefault: true
- id: debug
exec:
component: runtime
commandLine: npm run debug
workingDir: /project
group:
kind: debug
isDefault: true
```
### Debugging your devfile component via CLI
We will use the official [nodejs](https://github.com/odo-devfiles/registry/tree/master/devfiles/nodejs) example in our debugging session which includes the necessary `debug` step within `devfile.yaml`.
1. Download the example application:
```shell
odo create nodejs --starter nodejs-starter
```
For example:
```shell
$ odo create nodejs --starter nodejs-starter
Validation
✓ Checking devfile existence [11498ns]
✓ Checking devfile compatibility [15714ns]
✓ Creating a devfile component from registry: DefaultDevfileRegistry [17565ns]
✓ Validating devfile component [113876ns]
Starter Project
✓ Downloading starter project nodejs-starter from https://github.com/odo-devfiles/nodejs-ex.git [428ms]
Please use `odo push` command to create the component with source deployed
```
2. Push with the `--debug` flag which is required for all debugging deployments:
```shell
odo push --debug
```
For example:
```shell
$ odo push --debug
Validation
✓ Validating the devfile [29916ns]
Creating Kubernetes resources for component nodejs
✓ Waiting for component to start [38ms]
Applying URL changes
✓ URLs are synced with the cluster, no changes are required.
Syncing to component nodejs
✓ Checking file changes for pushing [1ms]
✓ Syncing files to the component [778ms]
Executing devfile commands for component nodejs
✓ Executing install command "npm install" [2s]
✓ Executing debug command "npm run debug" [1s]
Pushing devfile component nodejs
✓ Changes successfully pushed to component
```
NOTE: A custom debug command may be chosen via the `--debug-command="custom-step"` flag.
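For example, reusing the placeholder name from the note above:
```shell
odo push --debug --debug-command="custom-step"
```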
3. Port forward to the local port in order to access the debugging interface:
```shell
odo debug port-forward
```
For example:
```shell
$ odo debug port-forward
Started port forwarding at ports - 5858:5858
```
NOTE: A specific port may be specified using the `--local-port` flag.
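For example, to forward the debugger to local port 9292 (an arbitrary choice):
```shell
odo debug port-forward --local-port 9292
```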
4. Open a separate terminal window and check if the debug session is running.
```shell
odo debug info
```
For example:
```shell
$ odo debug info
Debug is running for the component on the local port : 5858
```
5. Accessing the debugger:
The debugger is accessible through an assortment of tools. An example of setting up a debug interface would be through [VSCode's debugging interface](https://code.visualstudio.com/docs/nodejs/nodejs-debugging#_remote-debugging).
```json
{
"type": "node",
"request": "attach",
"name": "Attach to remote",
"address": "TCP/IP address of process to be debugged",
"port": 5858
}
```

View File

@@ -0,0 +1,36 @@
---
title: Using odo on IBM-Z and Power
sidebar_position: 3
---
[//]: # (Add prerequisite section)
### Deploying your first devfile on IBM Z & Power
Since the [DefaultDevfileRegistry](https://registry.devfile.io/viewer) does not currently support IBM Z & Power, you will need to create a secure private devfile registry first. To create a new secure private registry, see the [secure registry](../architecture/secure-registry.md) documentation.
The following images can be used for devfiles on IBM Z & Power:
|Language | Devfile Name | Description | Image Source | Supported Platform |
| ----------- | ----------- | ----------- | ----------- | ----------- |
| dotnet | dotnet60 | Stack with .NET 6.0 | registry.access.redhat.com/ubi8/dotnet-60:6.0 | s390x |
| Go | go | Stack with the latest Go version | golang:latest | s390x |
| Java | java-maven | Upstream Maven and OpenJDK 11 | registry.redhat.io/codeready-workspaces/plugin-java11-openj9-rhel8 | s390x, ppc64le |
| Java | java-openliberty | Open Liberty microservice in Java | registry.redhat.io/codeready-workspaces/plugin-java11-openj9-rhel8 | s390x, ppc64le |
| Java | java-openliberty-gradle | Java application Gradle-built stack using the Open Liberty runtime | openliberty/application-stack:gradle-0.2 | s390x |
| Java | java-quarkus | Upstream Quarkus with Java+GraalVM | registry.redhat.io/codeready-workspaces/plugin-java8-openj9-rhel8 | s390x, ppc64le|
| Java | java-springboot | Spring Boot® using Java| registry.redhat.io/codeready-workspaces/plugin-java11-openj9-rhel8 | s390x, ppc64le|
| Vert.x Java| java-vertx | Upstream Vert.x using Java | registry.redhat.io/codeready-workspaces/plugin-java11-openj9-rhel8 | s390x, ppc64le|
| Java | java-wildfly-bootable-jar | Java stack with WildFly in bootable Jar mode, OpenJDK 11 and Maven 3.5 | registry.access.redhat.com/ubi8/openjdk-11 | s390x |
| Node.JS | nodejs | Stack with NodeJS 12 | registry.redhat.io/codeready-workspaces/plugin-java8-openj9-rhel8 | s390x, ppc64le|
| Node.JS | nodejs-angular | Stack with Angular 12 | node:lts-slim | s390x |
| Node.JS | nodejs-nextjs | Stack with Next.js 11 | node:lts-slim | s390x |
| Node.JS | nodejs-nuxtjs | Stack with Nuxt.js 2 | node:lts | s390x |
| Node.JS | nodejs-react | Stack with React 17 | node:lts-slim | s390x |
| Node.JS | nodejs-svelte | Stack with Svelte 3 | node:lts-slim | s390x |
| Node.JS | nodejs-vue | Stack with Vue 3 | node:lts-slim | s390x |
| PHP | php-laravel | Stack with Laravel 8 | composer:latest | s390x |
| Python| python | Python Stack with Python 3.7 | registry.redhat.io/codeready-workspaces/plugin-java8-openj9-rhel8 | s390x, ppc64le|
| Django| python-django| Python3.7 with Django| registry.redhat.io/codeready-workspaces/plugin-java8-openj9-rhel8| s390x, ppc64le|
**Note**: Access to the Red Hat registry is required to use these images on IBM Power Systems & IBM Z.
[//]: # (Steps to use devfiles can be found in Deploying your first devfile)

View File

@@ -0,0 +1,274 @@
---
title: Deploying a Java Open Liberty application with PostgreSQL
sidebar_position: 1
---
This tutorial illustrates deploying a [Java Open Liberty](https://openliberty.io/) application with odo and linking it to an in-cluster PostgreSQL service in a minikube environment.
There are two roles in this example:
1. Cluster Admin - Prepares the cluster by installing the required Operators.
2. Application Developer - Imports a Java application, creates a Database instance, and connects the application to the Database instance.
## Cluster admin
---
### Prerequisites
* This section assumes that you have installed [minikube and configured it](../getting-started/cluster-setup/kubernetes.md).
----
[//]: # (Move this section to Architecture > Service Binding or create a new Operators doc)
We will be using Operators in this guide. An Operator helps deploy instances of a given service, for example PostgreSQL, MySQL, or Redis.
Furthermore, these Operators are "bindable": if they expose the information necessary to connect to them, odo can help connect your component to their instances.
[//]: # (Move until here)
See the [Operator installation guide](../getting-started/cluster-setup/kubernetes.md) to install and configure an Operator on a minikube cluster.
The cluster admin must install two Operators into the cluster:
1. PostgreSQL Operator
2. Service Binding Operator
We will use the [Dev4Devs PostgreSQL Operator](https://operatorhub.io/operator/postgresql-operator-dev4devs-com) found on the [OperatorHub](https://operatorhub.io) to demonstrate a sample use case. This Operator is installed in the `my-postgresql-operator-dev4devs-com` namespace by default; if you want to use another namespace, make sure that you add your namespace to the `.spec.targetNamespaces` list in the definition file before installing the Operator.
<details>
<summary>On IBM Z & Power, follow the steps below to install the PostgreSQL Operator</summary>
#### Steps to install Dev4Devs PostgreSQL Operator on IBM Z and Power
Note: Since the [Dev4Devs PostgreSQL Operator](https://operatorhub.io/operator/postgresql-operator-dev4devs-com) is only supported on x86, this section uses the operator-registry and PostgreSQL Operator images published on `quay.io/ibm`. For Z, use the [operator-registry-s390x image](https://quay.io/repository/ibm/operator-registry-s390x) and the [postgresql-operator-s390x image](https://quay.io/repository/ibm/postgresql-operator-s390x). For Power, use the [operator-registry-ppc64le image](https://quay.io/repository/ibm/operator-registry-ppc64le) and the [postgresql-operator-ppc64le image](https://quay.io/repository/ibm/postgresql-operator-ppc64le).
1. Create custom CatalogSource
```shell
oc apply -f https://raw.githubusercontent.com/redhat-developer/odo/main/docs/website/manifests/catalog-source-$(uname -m).yaml
```
2. Install PostgreSQL Operator from custom CatalogSource
```shell
oc create -f https://raw.githubusercontent.com/redhat-developer/odo/main/docs/website/manifests/postgresql-operator-dev4devs-com-IBM-Z-P.yaml
```
</details>
**Note**: We will use the `my-postgresql-operator-dev4devs-com` namespace for this guide.
## Application Developer
---
### Prerequisites
This section assumes that you have [installed `odo`](../getting-started/installation.md).
---
Since the PostgreSQL Operator installed in the above step is available only in the `my-postgresql-operator-dev4devs-com` namespace, ensure that odo uses this namespace to perform any tasks:
```shell
odo project set my-postgresql-operator-dev4devs-com
```
If you installed the Operator in a different namespace, ensure that odo uses it to perform any tasks:
```shell
odo project set <your-namespace>
```
### Importing the JPA MicroService
In this example we will use odo to manage a sample [Java JPA MicroService application](https://github.com/OpenLiberty/application-stack-samples.git).
1. Clone the sample application to your system:
```shell
git clone https://github.com/OpenLiberty/application-stack-samples.git
```
2. Go to the sample JPA app directory:
```shell
cd ./application-stack-samples/jpa
```
3. Initialize the application:
```shell
odo create java-openliberty mysboproj
```
`java-openliberty` is the type of your application and `mysboproj` is the name of your application.
4. Deploy the application to the cluster:
```shell
odo push --show-log
```
5. The application is now deployed to the cluster. You can view the status of the cluster and the application test results by streaming the logs of the component that we pushed in the previous step.
```shell
odo log --follow
```
Notice the failing tests due to an `UnknownHostException`:
```shell
[INFO] [err] java.net.UnknownHostException: ${DATABASE_CLUSTERIP}
[INFO] [err] at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:220)
[INFO] [err] at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:403)
[INFO] [err] at java.base/java.net.Socket.connect(Socket.java:609)
[INFO] [err] at org.postgresql.core.PGStream.<init>(PGStream.java:68)
[INFO] [err] at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:144)
[INFO] [err] ... 86 more
[ERROR] Tests run: 2, Failures: 1, Errors: 1, Skipped: 0, Time elapsed: 0.706 s <<< FAILURE! - in org.example.app.it.DatabaseIT
[ERROR] testGetAllPeople Time elapsed: 0.33 s <<< FAILURE!
org.opentest4j.AssertionFailedError: Expected at least 2 people to be registered, but there were only: [] ==> expected: <true> but was: <false>
at org.example.app.it.DatabaseIT.testGetAllPeople(DatabaseIT.java:57)
[ERROR] testGetPerson Time elapsed: 0.047 s <<< ERROR!
java.lang.NullPointerException
at org.example.app.it.DatabaseIT.testGetPerson(DatabaseIT.java:41)
[INFO]
[INFO] Results:
[INFO]
[ERROR] Failures:
[ERROR] DatabaseIT.testGetAllPeople:57 Expected at least 2 people to be registered, but there were only: [] ==> expected: <true> but was: <false>
[ERROR] Errors:
[ERROR] DatabaseIT.testGetPerson:41 NullPointer
[INFO]
[ERROR] Tests run: 2, Failures: 1, Errors: 1, Skipped: 0
[INFO]
[ERROR] Integration tests failed: There are test failures.
```
Note: This error will be fixed at a later stage in the tutorial when we connect a database instance to this application.
6. You can also create a URL with `odo` to access the application:
```shell
odo url create --host $(minikube ip).nip.io
```
7. Push the URL to activate it:
```shell
odo push --show-log
```
8. Display the created URL:
```shell
odo url list
```
You will see a fully formed URL that can be used in a web browser:
```shell
$ odo url list
Found the following URLs for component mysboproj
NAME STATE URL PORT SECURE KIND
mysboproj-9080 Pushed http://mysboproj-9080.192.168.49.2.nip.io 9080 false ingress
```
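Before opening the URL in a browser, you can optionally check from the command line that it responds (a hedged sanity check; the URL will differ in your environment):
```shell
curl -s -o /dev/null -w "%{http_code}\n" http://mysboproj-9080.192.168.49.2.nip.io/CreatePerson.xhtml
```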
9. Use the URL to navigate to the `CreatePerson.xhtml` data entry page to use the application:
In this tutorial, we will access `http://mysboproj-9080.192.168.49.2.nip.io/CreatePerson.xhtml`. Note that the URL could be different in your case. Now, enter name and age data using the form.
10. Click on the **Save** button when complete.
Note that the data you enter is not displayed when you click on the "View Persons Record List" link until we connect the application to a database.
### Creating a database to be used by the sample application
You can use the default configuration of the PostgreSQL Operator to start a PostgreSQL database. But since our application uses a few specific configuration values, let's make sure they are properly populated in the database service we start.
1. Store the YAML of the service in a file:
```shell
odo service create postgresql-operator.v0.1.1/Database --dry-run > db.yaml
```
2. Modify and add the following values under the `metadata:` section in the `db.yaml` file:
```yaml
name: sampledatabase
annotations:
service.binding/db_name: 'path={.spec.databaseName}'
service.binding/db_password: 'path={.spec.databasePassword}'
service.binding/db_user: 'path={.spec.databaseUser}'
```
This configuration ensures that when a database service is started using this file, the appropriate annotations are added to it. These annotations help the Service Binding Operator inject the values for `databaseName`, `databasePassword`, and `databaseUser` into the application.
3. Change the following values under the `spec:` section of the YAML file:
```yaml
databaseName: "sampledb"
databasePassword: "samplepwd"
databaseUser: "sampleuser"
```
4. Create the database from the YAML file:
```shell
odo service create --from-file db.yaml
```
```shell
odo push --show-log
```
This action will create a database instance pod in the `my-postgresql-operator-dev4devs-com` namespace. The application will be configured to use this database.
### Binding the database and the application
Now, the only thing that remains is to connect the DB and the application. We will use odo to create a link to the PostgreSQL Database Operator in order to access the database connection information.
1. List the service associated with the database created via the PostgreSQL Operator:
```shell
odo service list
```
Your output should look similar to the following:
```shell
$ odo service list
NAME MANAGED BY ODO STATE AGE
Database/sampledatabase Yes (mysboproj) Pushed 6m35s
```
2. Create a Service Binding Request between the application and the database service listed in the previous step, using the Service Binding Operator:
```shell
odo link Database/sampledatabase
```
3. Push this link to the cluster:
```shell
odo push --show-log
```
After the link has been created and pushed, a secret will have been created containing the database connection data that the application requires.
You can inspect the new intermediate secret via the dashboard console in the `my-postgresql-operator-dev4devs-com` namespace by navigating to Secrets and clicking on the secret named `mysboproj-database-sampledatabase`. Notice that it contains 4 pieces of data related to the connection information for your PostgreSQL database instance.
Use `minikube dashboard` to launch the dashboard console.
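If you prefer the command line over the dashboard, a roughly equivalent check (assuming `kubectl` is configured against the same cluster) is:
```shell
kubectl get secret mysboproj-database-sampledatabase \
  -n my-postgresql-operator-dev4devs-com -o yaml
```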
Note: Pushing the newly created link will terminate the existing application pod and start a new application pod that mounts this secret.
4. Once the new pod has initialized, you can see the secret database connection data as it is injected into the pod environment by executing the following:
```shell
$ odo exec -- bash -c 'export | grep DATABASE'
declare -x DATABASE_CLUSTERIP="10.106.182.173"
declare -x DATABASE_DB_NAME="sampledb"
declare -x DATABASE_DB_PASSWORD="samplepwd"
declare -x DATABASE_DB_USER="sampleuser"
```
Once the new version is up (there will be a slight delay until the application is available), navigate to the `CreatePerson.xhtml` page using the URL created in a previous step. Enter the requested data and click the **Save** button.
Notice that you are redirected to the `PersonList.xhtml` page, where your data is displayed after having been stored in the PostgreSQL database and retrieved for display.
You may inspect the database instance itself and query the table to see the data in place by using the PostgreSQL command-line tool, `psql`.
5. Navigate to the pod containing your database from the dashboard console. Use `minikube dashboard` to start the console.
6. Click on the terminal tab.
7. At the terminal prompt, access `psql` for your database `sampledb`:
```shell
psql sampledb
```
Your output should look similar to the following:
```shell
sh-4.2$ psql sampledb
psql (12.3)
Type "help" for help.
sampledb=#
```
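Alternatively, instead of the dashboard terminal, you can open `psql` directly with `kubectl exec` (the pod name below is a placeholder; look it up with `kubectl get pods -n my-postgresql-operator-dev4devs-com`):
```shell
kubectl exec -it <database-pod-name> \
  -n my-postgresql-operator-dev4devs-com -- psql sampledb
```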
8. Issue the following SQL statement from the `psql` prompt:
```postgresql
SELECT * FROM person;
```
9. You can see the data that appeared in the results of the test run:
```shell
sampledb=# SELECT * FROM person;
personid | age | name
----------+-----+---------
5 | 52 | person1
(1 row)
```

View File

@@ -0,0 +1,100 @@
---
title: Using devfile lifecycle events
sidebar_position: 5
---
odo uses a devfile to build and deploy components. You can also use devfile events with a component during its lifecycle. The four types of devfile events are `preStart`, `postStart`, `preStop`, and `postStop`.
Each event is an array of devfile commands to be executed. The devfile command to be executed should be of type `exec` or `composite`:
```yaml
components:
- name: runtime
container:
image: quay.io/eclipse/che-nodejs10-ubi:nightly
memoryLimit: 1024Mi
endpoints:
- name: "3000/tcp"
targetPort: 3000
mountSources: true
command: ['tail']
args: [ '-f', '/dev/null']
- name: "tools"
container:
image: quay.io/eclipse/che-nodejs10-ubi:nightly
mountSources: true
memoryLimit: 1024Mi
commands:
- id: copy
exec:
commandLine: "cp /tools/myfile.txt tools.txt"
component: tools
workingDir: /
- id: initCache
exec:
commandLine: "./init_cache.sh"
component: tools
workingDir: /
- id: connectDB
exec:
commandLine: "./connect_db.sh"
component: runtime
workingDir: /
- id: disconnectDB
exec:
commandLine: "./disconnect_db.sh"
component: runtime
workingDir: /
- id: cleanup
exec:
commandLine: "./cleanup.sh"
component: tools
workingDir: /
- id: postStartCompositeCmd
composite:
label: Copy and Init Cache
commands:
- copy
- initCache
parallel: true
events:
preStart:
- "connectDB"
postStart:
- "postStartCompositeCmd"
preStop:
- "disconnectDB"
postStop:
- "cleanup"
```
### preStart
PreStart events are executed as init containers for the project pod in the order they are specified.
The devfile command's `commandLine` and `workingDir` become the init container's command; as a result, the devfile component container's `command` and `args`, or the container image's `Command` and `Args`, are overwritten. If a composite command with `parallel: true` is used, it will still be executed sequentially, because Kubernetes init containers only execute in sequence.
In the above example, PreStart is going to execute the devfile command `connectDB` as an init container for the odo component's main pod.
Caution should be exercised when using preStart with devfile container components that mount sources. File operations performed by preStart commands on the project sync directory may result in inconsistent behaviour.
Note that odo currently does not support preStart events.
### postStart
PostStart events are executed when the Kubernetes deployment for the odo component is created.
In the above example, PostStart is going to execute the composite command `postStartCompositeCmd` once the odo component's deployment is created and the pod is up and running. The composite command `postStartCompositeCmd` has sub-commands `copy` and `initCache` which will be executed in parallel.
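If you want to confirm that the postStart commands actually ran, one hedged option is to push with log output and look for the event commands in it; the exact wording of that output may vary between odo versions:
```shell
odo push --show-log
```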
### preStop
PreStop events are executed before the Kubernetes deployment for the odo component is deleted.
In the above example, PreStop is going to execute the devfile command `disconnectDB` before the odo component deployment is deleted.
### postStop
PostStop events are executed after the Kubernetes deployment for the odo component is deleted.
In the above example, PostStop will execute the devfile command `cleanup` after the component has been deleted.
Note that odo currently does not support postStop events.

View File

@@ -0,0 +1,34 @@
---
title: Using the odo.dev.push.path related attribute
sidebar_position: 4
---
`odo` uses the `odo.dev.push.path` related attributes from the devfile's run commands to push only the specified files and folders to the component.
The format of the attribute is `"odo.dev.push.path:<local_relative_path>": "<remote_relative_path>"`. Multiple such attributes can be specified in the run command's `attributes` section:
```yaml
commands:
- id: dev-run
attributes:
"dev.odo.push.path:target/quarkus-app": "remote-target/quarkus-app"
"dev.odo.push.path:README.txt": "docs/README.txt"
exec:
component: tools
commandLine: "java -jar remote-target/quarkus-app/quarkus-run.jar -Dquarkus.http.host=0.0.0.0"
hotReloadCapable: true
group:
kind: run
isDefault: true
workingDir: $PROJECTS_ROOT
- id: dev-debug
exec:
component: tools
commandLine: "java -Xdebug -Xrunjdwp:server=y,transport=dt_socket,address=${DEBUG_PORT},suspend=n -jar remote-target/quarkus-app/quarkus-run.jar -Dquarkus.http.host=0.0.0.0"
hotReloadCapable: true
group:
kind: debug
isDefault: true
workingDir: $PROJECTS_ROOT
```
In the above example, the contents of the `quarkus-app` folder, which is inside the `target` folder, will be pushed to the remote location `remote-target/quarkus-app`, and the file `README.txt` will be pushed to `docs/README.txt`. The local path is relative to the component's local folder. The remote location is relative to the folder containing the component's source code inside the container.
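To sanity-check what was synced, one option (a sketch; run it from the component directory after `odo push`, and adjust the path if `odo exec` does not start in the component's source directory inside the container) is:
```shell
odo exec -- ls -R remote-target/quarkus-app
```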

View File

@@ -0,0 +1,4 @@
{
"label": "Using odo",
"position": 3
}

View File

@@ -0,0 +1,164 @@
---
title: Create Component
sidebar_position: 1
sidebar_label: Creating components
---
# Creating components using odo
A [Component](../getting-started/basics#component) is the most basic unit of operation for odo, and the way to create one is the `odo create` command (short for `odo component create`).
In the simplest terms, when you "create" an odo component, you populate your current working directory with the file `devfile.yaml`. A devfile is a manifest file that contains information about the various resources (URL, Storage, Services, etc.) that correspond to your component and that will be created on the Kubernetes cluster when you execute the `odo push` command. Most odo commands first modify (add or remove configuration from) this file, and a subsequent `odo push` then creates or deletes the corresponding resources on the Kubernetes cluster.
However, odo users are not expected to know how the `devfile.yaml` is organized; it is the odo commands that create, update, or delete it.
One final thing to keep in mind - there can be only one odo component in a directory, and nesting odo components is not expected to work well. In other words, if your microservices application has multiple parts (components), say a frontend and a backend, that you want to create odo components for, you should put them in separate directories and not try to nest them. Take a look at the example structure below:
```shell
$ tree my-awesome-microservices-app
my-awesome-microservices-app
├── backend
│ └── devfile.yaml
└── frontend
└── devfile.yaml
```
In this guide, we are going to create a Spring Boot component and a Nodejs component to deploy parts of the [odo quickstart](https://github.com/dharmit/odo-quickstart) project to a Kubernetes cluster.
Let's clone the project first:
```shell
git clone https://github.com/dharmit/odo-quickstart
cd odo-quickstart
```
Next, create a project <!-- add link to project command reference here --> on the Kubernetes cluster in which we will be creating our component. This keeps the resources we create isolated from the rest of the cluster (this step is optional):
```shell
odo project create myproject
```
Alternatively, you could also use one of the existing projects on the cluster:
```shell
odo project list
```
Now, set the project in which you want to create the component:
```shell
# replace <project-name> with a valid value from the list
odo project set <project-name>
```
odo supports interactive and non-interactive ways of creating a component. We will create the Spring Boot component interactively and the Nodejs component non-interactively. The Spring Boot component is in the `backend` directory and contains the code for the REST API that our Nodejs component will interact with. The Nodejs component is in the `frontend` directory.
## Creating a component interactively
To interactively create the Spring Boot component, `cd` into the cloned project (already done if you copy-pasted the command above), then `cd` into the `backend` directory, and execute:
```shell
cd backend
odo create
```
You will be prompted with a few questions, one after another. Go through each one of them to create the component.
1. The first question is about selecting the component type:
```shell
$ odo create
? Which devfile component type do you wish to create [Use arrows to move, enter to select, type to filter]
> java-maven
java-maven
java-openliberty
java-openliberty
java-quarkus
java-quarkus
java-springboot
```
By default, `java-maven` is selected for us. Since this is a Spring Boot application, we should select `java-springboot`.
We can either scroll down to `java-springboot` using the arrow keys, or start typing `spring` at the prompt; odo will then filter the component types based on the input.
2. Next, odo asks you to name the component:
```shell
$ odo create
? Which devfile component type do you wish to create java-springboot
? What do you wish to name the new devfile component (java-springboot) backend
```
Name it `backend`.
3. Next, odo asks you for the project in which you would like to create the component. Use the project `myproject` that we created earlier or the one you set using the `odo project set` command.
```shell
$ odo create
? Which devfile component type do you wish to create java-springboot
? What do you wish to name the new devfile component java-springboot
? What project do you want the devfile component to be created in myproject
```
Now you will have a `devfile.yaml` in your current working directory. But odo is not done asking you questions yet.
4. Lastly, odo asks you if you would like to download a "starter project". Since we already cloned the odo-quickstart project, we answer No by typing `n` and hitting the return key. Starter projects are discussed later in [this document](#starter-projects):
```shell
$ odo create
? Which devfile component type do you wish to create java-springboot
? What do you wish to name the new devfile component java-springboot
? What project do you want the devfile component to be created in myproject
Devfile Object Validation
✓ Checking devfile existence [66186ns]
✓ Creating a devfile component from registry: stage [92202ns]
Validation
✓ Validating if devfile name is correct [99609ns]
? Do you want to download a starter project (y/N) n
```
Your Spring Boot component is now ready for use.
## Creating a component non-interactively
To non-interactively create the Nodejs component to deploy our frontend code, `cd` into the cloned `frontend` directory and execute:
```shell
# assuming you are in the odo-quickstart/backend directory
cd ../frontend
odo create nodejs frontend -n myproject
```
Here `nodejs` is the type of the component, `frontend` is the name of the component, and `-n myproject` tells odo to use the project `myproject` for the mentioned `odo create` operation.
## Starter projects
Besides creating a component for existing code, you can also use a "starter project" when creating a component.
Starter projects are example projects developed by the community to showcase the usability of devfiles. An odo user can use these starter projects by running the `odo create` command in an empty directory.
### Starter projects in interactive mode
To interactively create a Java Spring Boot component using a starter project, follow the steps below:
```shell
mkdir myOdoComponent && cd myOdoComponent
odo create
```
When odo asks you the questions that follow, provide answers like the ones below:
```shell
$ odo create
? Which devfile component type do you wish to create java-springboot
? What do you wish to name the new devfile component myFirstComponent
? What project do you want the devfile component to be created in myproject
Devfile Object Validation
✓ Checking devfile existence [60122ns]
✓ Creating a devfile component from registry: stage [91411ns]
Validation
✓ Validating if devfile name is correct [35749ns]
? Do you want to download a starter project Yes
Starter Project
✓ Downloading starter project springbootproject from https://github.com/odo-devfiles/springboot-ex.git [716ms]
Please use `odo push` command to create the component with source deployed
```
### Starter projects in non-interactive mode
To non-interactively create a Java Spring Boot component using a starter project, follow the steps below:
```shell
mkdir myOdoComponent && cd myOdoComponent
odo create java-springboot myFirstComponent --starter springbootproject
```
## Push the component to Kubernetes
odo follows a "create & push" workflow for almost all the commands. Meaning, most odo commands won't create resources on Kubernetes cluster unless you run `odo push` command.
Among the various ways described above, irrespective of how you created the component, the next step to create the resources for our component on the cluster would be to run `odo push`.
Execute below command from the component directory of both the `frontend` and `backend` components:
```shell
odo push
```
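After pushing both components, you can optionally confirm that they exist on the cluster by listing them (run from either component directory; the output should include `frontend` and `backend` if everything worked):
```shell
odo list
```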

View File

@@ -0,0 +1,66 @@
---
title: Create URLs using odo
sidebar_position: 2
sidebar_label: Create URL
---
In the [previous section](./create-component) we created two components — a Spring Boot application (`backend`) listening on port 8080 and a Nodejs application (`frontend`) listening on port 3000 — and pushed them to the Kubernetes cluster. These are also the default ports for the Spring Boot and Nodejs component types, respectively. In this guide, we will create URLs to access these components from the host system.
Note that the URLs we create in this section will only help you access the components in a web browser; the application itself won't be usable until we create some services and links, which we will cover in the next section.
## OpenShift
If you are using [Code Ready Containers (CRC)](https://github.com/code-ready/crc) or another form of OpenShift cluster, odo has already created URLs for you by using the [OpenShift Routes](https://docs.openshift.com/container-platform/latest/networking/routes/route-configuration.html) feature. Execute `odo url list` from the component directory of the `backend` and `frontend` components to get the URLs odo created for them. If you look closely at the `odo push` output, odo prints the URL there as well.
Below are example `odo url list` outputs for the backend and frontend components. Note that the URLs will be different in your case:
```shell
# backend component
$ odo url list
Found the following URLs for component backend
NAME STATE URL PORT SECURE KIND
8080-tcp Pushed http://8080-tcp-app-myproject.hostname.com 8080 false route
# frontend component
$ odo url list
Found the following URLs for component frontend
NAME STATE URL PORT SECURE KIND
http-3000 Pushed http://http-3000-app-myproject.hostname.com 3000 false route
```
## Kubernetes
If you are using a Kubernetes cluster, you will have to create a URL using the `odo url` command. This is because odo cannot assume the host information to be used to create a URL. To be able to create URLs on a Kubernetes cluster, please make sure that you have an [Ingress Controller](/docs/getting-started/cluster-setup/kubernetes/#enabling-ingress) installed.
If you are working on a [minikube](/docs/getting-started/cluster-setup/kubernetes), Ingress can be enabled using:
```shell
minikube addons enable ingress
```
If you are working on any other kind of Kubernetes cluster, please check with your cluster administrator to enable the Ingress Controller. In this guide, we cover URL creation for a minikube setup. For any other Kubernetes cluster, please replace `$(minikube ip).nip.io` in the below commands with the host information for your specific cluster.
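For example, on a cluster whose Ingress host is `apps.example.com` (a hypothetical value), the backend URL command from the next section would instead look like:
```shell
odo url create --port 8080 --host apps.example.com
```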
### Backend component
Our backend component, which is based on Spring Boot, listens on port 8080. `cd` into the directory for this component and execute the below commands:
```shell
odo url create --port 8080 --host $(minikube ip).nip.io
odo push
```
odo follows a "create & push" workflow for most commands. But in this case, adding `--now` flag to `odo url create` could reduce two commands into a single command:
```shell
odo url create --port 8080 --host $(minikube ip).nip.io --now
```
### Frontend component
Our frontend component, which is based on Nodejs, listens on port 3000. `cd` into the directory for this component and execute the below commands:
```shell
odo url create --port 3000 --host $(minikube ip).nip.io
odo push
```
Again, if you would prefer to get this done in a single command:
```shell
odo url create --port 3000 --host $(minikube ip).nip.io --now
```

View File

@@ -0,0 +1,8 @@
{
"version-3.0.0-alpha1/tutorialSidebar": [
{
"type": "autogenerated",
"dirName": "."
}
]
}

View File

@@ -0,0 +1,3 @@
[
"3.0.0"
]

8140
docs/website/yarn.lock Normal file

File diff suppressed because it is too large Load Diff