# Object Storage

The Loki Operator supports AWS S3, Azure, GCS, MinIO, OpenShift Data Foundation, Swift, and AlibabaCloud OSS for LokiStack object storage.

**Note:** After setting up LokiStack for any object storage provider, configure a logging collector that references the LokiStack in order to view the logs.
## AWS S3

### Requirements

- Create a bucket on AWS.

### Installation

- Deploy the Loki Operator to your cluster.

- Create an Object Storage secret with keys as follows:

  ```console
  kubectl create secret generic lokistack-dev-s3 \
    --from-literal=bucketnames="<BUCKET_NAME>" \
    --from-literal=endpoint="<AWS_BUCKET_ENDPOINT>" \
    --from-literal=access_key_id="<AWS_ACCESS_KEY_ID>" \
    --from-literal=access_key_secret="<AWS_ACCESS_KEY_SECRET>" \
    --from-literal=region="<AWS_REGION_YOUR_BUCKET_LIVES_IN>"
  ```

  or with `SSE-KMS` encryption:

  ```console
  kubectl create secret generic lokistack-dev-s3 \
    --from-literal=bucketnames="<BUCKET_NAME>" \
    --from-literal=endpoint="<AWS_BUCKET_ENDPOINT>" \
    --from-literal=access_key_id="<AWS_ACCESS_KEY_ID>" \
    --from-literal=access_key_secret="<AWS_ACCESS_KEY_SECRET>" \
    --from-literal=region="<AWS_REGION_YOUR_BUCKET_LIVES_IN>" \
    --from-literal=sse_type="SSE-KMS" \
    --from-literal=sse_kms_key_id="<AWS_SSE_KMS_KEY_ID>" \
    --from-literal=sse_kms_encryption_context="<OPTIONAL_AWS_SSE_KMS_ENCRYPTION_CONTEXT_JSON>"
  ```

  See also the official docs on the AWS KMS Key ID and the AWS KMS Encryption Context. (Note: only content without newlines is allowed, because the value is exposed to the containers via an environment variable.)

  or with `SSE-S3` encryption:

  ```console
  kubectl create secret generic lokistack-dev-s3 \
    --from-literal=bucketnames="<BUCKET_NAME>" \
    --from-literal=endpoint="<AWS_BUCKET_ENDPOINT>" \
    --from-literal=access_key_id="<AWS_ACCESS_KEY_ID>" \
    --from-literal=access_key_secret="<AWS_ACCESS_KEY_SECRET>" \
    --from-literal=region="<AWS_REGION_YOUR_BUCKET_LIVES_IN>" \
    --from-literal=sse_type="SSE-S3"
  ```

  Additionally, you can control the S3 URL-style access behavior with the `forcepathstyle` parameter:

  ```console
  kubectl create secret generic lokistack-dev-s3 \
    --from-literal=bucketnames="<BUCKET_NAME>" \
    --from-literal=endpoint="<AWS_BUCKET_ENDPOINT>" \
    --from-literal=access_key_id="<AWS_ACCESS_KEY_ID>" \
    --from-literal=access_key_secret="<AWS_ACCESS_KEY_SECRET>" \
    --from-literal=region="<AWS_REGION_YOUR_BUCKET_LIVES_IN>" \
    --from-literal=forcepathstyle="true"
  ```

  By default:

  - AWS endpoints (ending with `.amazonaws.com`) use virtual-hosted style (`forcepathstyle=false`).
  - Non-AWS endpoints use path style (`forcepathstyle=true`).

  Set `forcepathstyle` to `false` if you need to use virtual-hosted style with non-AWS S3-compatible services.

  Here `lokistack-dev-s3` is the secret name.

- Create an instance of LokiStack by referencing the secret name and type as `s3`:

  ```yaml
  spec:
    storage:
      secret:
        name: lokistack-dev-s3
        type: s3
  ```
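The default `forcepathstyle` selection described above can be sketched as a small shell function. This is an illustrative reimplementation of the stated rule, not code taken from the Loki Operator, and the endpoint values are made up:

```shell
# Sketch of the documented default: AWS endpoints get virtual-hosted style,
# everything else gets path style. Illustrative only.
default_forcepathstyle() {
  case "$1" in
    *.amazonaws.com) echo "false" ;;  # AWS endpoint: virtual-hosted style
    *)               echo "true"  ;;  # non-AWS endpoint: path style
  esac
}

default_forcepathstyle "s3.eu-central-1.amazonaws.com"  # prints "false"
default_forcepathstyle "minio.minio.svc:9000"           # prints "true"
```

If your S3-compatible service requires virtual-hosted addressing, override the default by setting `forcepathstyle="false"` explicitly in the secret.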
## Azure

### Requirements

- Create a blob storage container on Azure.

### Installation

- Deploy the Loki Operator to your cluster.

- Create an Object Storage secret with keys as follows:

  ```console
  kubectl create secret generic lokistack-dev-azure \
    --from-literal=container="<AZURE_CONTAINER_NAME>" \
    --from-literal=environment="<AZURE_ENVIRONMENT>" \
    --from-literal=account_name="<AZURE_ACCOUNT_NAME>" \
    --from-literal=account_key="<AZURE_ACCOUNT_KEY>" \
    --from-literal=endpoint_suffix="<OPTIONAL_AZURE_ENDPOINT_SUFFIX>"
  ```

  where `lokistack-dev-azure` is the secret name.

- Create an instance of LokiStack by referencing the secret name and type as `azure`:

  ```yaml
  spec:
    storage:
      secret:
        name: lokistack-dev-azure
        type: azure
  ```
## Google Cloud Storage

### Requirements

- Create a project on Google Cloud Platform.
- Create a bucket under the same project.
- Create a service account under the same project for GCP authentication.

### Installation

- Deploy the Loki Operator to your cluster.

- Copy the service account credentials received from GCP into a file named `key.json`.

- Create an Object Storage secret with the keys `bucketname` and `key.json` as follows:

  ```console
  kubectl create secret generic lokistack-dev-gcs \
    --from-literal=bucketname="<BUCKET_NAME>" \
    --from-file=key.json="<PATH/TO/KEY.JSON>"
  ```

  where `lokistack-dev-gcs` is the secret name, `<BUCKET_NAME>` is the name of the bucket created in the requirements step, and `<PATH/TO/KEY.JSON>` is the file path the `key.json` was copied to.

- Create an instance of LokiStack by referencing the secret name and type as `gcs`:

  ```yaml
  spec:
    storage:
      secret:
        name: lokistack-dev-gcs
        type: gcs
  ```
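As a sketch, the same secret can also be created declaratively instead of via `kubectl create secret`; the bucket name and the `key.json` contents below are placeholders, not real credentials:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: lokistack-dev-gcs
type: Opaque
stringData:
  bucketname: <BUCKET_NAME>
  key.json: |
    {
      "type": "service_account",
      "project_id": "<GCP_PROJECT_ID>",
      "private_key_id": "<KEY_ID>",
      "client_email": "<SERVICE_ACCOUNT_EMAIL>"
    }
```

A real `key.json` contains additional fields (for example `private_key`); paste the whole file content unmodified under the `key.json` key.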
## MinIO

### Requirements

- Deploy MinIO on your cluster, e.g. using the MinIO Operator.
- Create a bucket on MinIO via the CLI.

### Installation

- Deploy the Loki Operator to your cluster.

- Create an Object Storage secret with keys as follows:

  ```console
  kubectl create secret generic lokistack-dev-minio \
    --from-literal=bucketnames="<BUCKET_NAME>" \
    --from-literal=endpoint="<MINIO_BUCKET_ENDPOINT>" \
    --from-literal=access_key_id="<MINIO_ACCESS_KEY_ID>" \
    --from-literal=access_key_secret="<MINIO_ACCESS_KEY_SECRET>"
  ```

  where `lokistack-dev-minio` is the secret name.

- Create an instance of LokiStack by referencing the secret name and type as `s3`:

  ```yaml
  spec:
    storage:
      secret:
        name: lokistack-dev-minio
        type: s3
  ```
## OpenShift Data Foundation

### Requirements

- Deploy OpenShift Data Foundation on your cluster.

### Installation

- Deploy the Loki Operator to your cluster.

- Create an ObjectBucketClaim in the `openshift-logging` namespace:

  ```yaml
  apiVersion: objectbucket.io/v1alpha1
  kind: ObjectBucketClaim
  metadata:
    name: loki-bucket-odf
    namespace: openshift-logging
  spec:
    generateBucketName: loki-bucket-odf
  ```

- Get the bucket properties from the associated ConfigMap:

  ```console
  BUCKET_HOST=$(kubectl get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_HOST}')
  BUCKET_NAME=$(kubectl get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_NAME}')
  BUCKET_PORT=$(kubectl get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_PORT}')
  ```

- Get the bucket access keys from the associated Secret:

  ```console
  ACCESS_KEY_ID=$(kubectl get -n openshift-logging secret loki-bucket-odf -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d)
  SECRET_ACCESS_KEY=$(kubectl get -n openshift-logging secret loki-bucket-odf -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d)
  ```

- Create an Object Storage secret with keys as follows:

  ```console
  kubectl create -n openshift-logging secret generic lokistack-dev-odf \
    --from-literal=access_key_id="${ACCESS_KEY_ID}" \
    --from-literal=access_key_secret="${SECRET_ACCESS_KEY}" \
    --from-literal=bucketnames="${BUCKET_NAME}" \
    --from-literal=endpoint="https://${BUCKET_HOST}:${BUCKET_PORT}"
  ```

  where `lokistack-dev-odf` is the secret name. The values for `ACCESS_KEY_ID`, `SECRET_ACCESS_KEY`, `BUCKET_NAME`, `BUCKET_HOST`, and `BUCKET_PORT` are taken from the Secret and ConfigMap that accompany your ObjectBucketClaim.

- Create an instance of LokiStack by referencing the secret name and type as `s3`:

  ```yaml
  apiVersion: loki.grafana.com/v1
  kind: LokiStack
  metadata:
    name: logging-loki
    namespace: openshift-logging
  spec:
    storage:
      secret:
        name: lokistack-dev-odf
        type: s3
      tls:
        caName: openshift-service-ca.crt
    tenants:
      mode: openshift-logging
  ```
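The `endpoint` value in the secret is simply `https://` plus the ConfigMap's host and port, and values read from a Kubernetes Secret are base64-encoded. A quick, self-contained illustration with made-up values (the host shown is an assumption about a typical ODF service name, and the encoded string is the well-known AWS example key, not a real credential):

```shell
# Made-up example values standing in for the ObjectBucketClaim's ConfigMap data.
BUCKET_HOST="s3.openshift-storage.svc"
BUCKET_PORT="443"

# Assemble the endpoint exactly as in the kubectl command above.
endpoint="https://${BUCKET_HOST}:${BUCKET_PORT}"
echo "${endpoint}"  # prints "https://s3.openshift-storage.svc:443"

# Secret data must be base64-decoded before use, hence the `base64 -d` above:
echo "QUtJQUlPU0ZPRE5ON0VYQU1QTEU=" | base64 -d  # prints "AKIAIOSFODNN7EXAMPLE"
```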
## Swift

### Requirements

- Create a container on Swift.

### Installation

- Deploy the Loki Operator to your cluster.

- Create an Object Storage secret with keys as follows:

  ```console
  kubectl create secret generic lokistack-dev-swift \
    --from-literal=auth_url="<SWIFT_AUTH_URL>" \
    --from-literal=username="<SWIFT_USERNAME>" \
    --from-literal=user_domain_name="<SWIFT_USER_DOMAIN_NAME>" \
    --from-literal=user_domain_id="<SWIFT_USER_DOMAIN_ID>" \
    --from-literal=user_id="<SWIFT_USER_ID>" \
    --from-literal=password="<SWIFT_PASSWORD>" \
    --from-literal=domain_id="<SWIFT_DOMAIN_ID>" \
    --from-literal=domain_name="<SWIFT_DOMAIN_NAME>" \
    --from-literal=container_name="<SWIFT_CONTAINER_NAME>"
  ```

  where `lokistack-dev-swift` is the secret name.

- Optionally, you can provide project-specific data and/or a region as follows:

  ```console
  kubectl create secret generic lokistack-dev-swift \
    --from-literal=auth_url="<SWIFT_AUTH_URL>" \
    --from-literal=username="<SWIFT_USERNAME>" \
    --from-literal=user_domain_name="<SWIFT_USER_DOMAIN_NAME>" \
    --from-literal=user_domain_id="<SWIFT_USER_DOMAIN_ID>" \
    --from-literal=user_id="<SWIFT_USER_ID>" \
    --from-literal=password="<SWIFT_PASSWORD>" \
    --from-literal=domain_id="<SWIFT_DOMAIN_ID>" \
    --from-literal=domain_name="<SWIFT_DOMAIN_NAME>" \
    --from-literal=container_name="<SWIFT_CONTAINER_NAME>" \
    --from-literal=project_id="<SWIFT_PROJECT_ID>" \
    --from-literal=project_name="<SWIFT_PROJECT_NAME>" \
    --from-literal=project_domain_id="<SWIFT_PROJECT_DOMAIN_ID>" \
    --from-literal=project_domain_name="<SWIFT_PROJECT_DOMAIN_NAME>" \
    --from-literal=region="<SWIFT_REGION>"
  ```

- Create an instance of LokiStack by referencing the secret name and type as `swift`:

  ```yaml
  spec:
    storage:
      secret:
        name: lokistack-dev-swift
        type: swift
  ```
## AlibabaCloud OSS

### Requirements

- Create a bucket on AlibabaCloud.

### Installation

- Deploy the Loki Operator to your cluster.

- Create an Object Storage secret with keys as follows:

  ```console
  kubectl create secret generic lokistack-dev-alibabacloud \
    --from-literal=bucket="<BUCKET_NAME>" \
    --from-literal=endpoint="<OSS_BUCKET_ENDPOINT>" \
    --from-literal=access_key_id="<OSS_ACCESS_KEY_ID>" \
    --from-literal=secret_access_key="<OSS_ACCESS_KEY_SECRET>"
  ```

  where `lokistack-dev-alibabacloud` is the secret name. Note that the key names differ from the S3 secret: `bucket` (not `bucketnames`) and `secret_access_key` (not `access_key_secret`).

- Create an instance of LokiStack by referencing the secret name and type as `alibabacloud`:

  ```yaml
  spec:
    storage:
      secret:
        name: lokistack-dev-alibabacloud
        type: alibabacloud
  ```