AWS, GCP, and DigitalOcean are supported out of the box. For installation on a different cloud provider or in your own datacenter, any S3-compatible object storage can be supplied, such as a MinIO server.
Helm version 3.0+ (Helm CLI)
Kubectl CLI
A running Kubernetes cluster with a minimum of 2 cores and 4 GB of memory
Kafka for sourcing logs; the other topics are to be created during installation
S3 credentials and a bucket to store index and data files
AVX2 support needed on compute machines
On Linux (or Unix) machines, information about your CPU is in /proc/cpuinfo. You can extract it by hand or with a grep command (grep flags /proc/cpuinfo). Most compilers also automatically define __AVX2__ when targeting AVX2, so you can check for that too.
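For example, either of the following one-liners should confirm AVX2 support on a Linux machine (the second assumes gcc is installed):

```bash
$ grep -q avx2 /proc/cpuinfo && echo "AVX2 supported" || echo "AVX2 not supported"
$ echo | gcc -march=native -dM -E - | grep AVX2   # prints "#define __AVX2__ 1" if available
```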
```
$ helm repo list          # or: helm repo ls
NAME        URL
episilia    https://episilia.github.io/episilia-deploy/
```
Searching for charts in the repository:
```bash
$ helm search repo episilia
```
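The output should include the episilia-spike master chart used in the next step (versions and description below are illustrative):

```
NAME                      CHART VERSION   APP VERSION   DESCRIPTION
episilia/episilia-spike   x.y.z           x.y.z         ...
```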
Step 2: Configure values
Update global values in the episilia/episilia-spike master chart's values.yaml file. All configurable values are explained in the section below titled "Configuration".
To inspect the values before installing the application, use the command below:
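For example, to write the chart's default values to a local file for editing:

```bash
$ helm show values episilia/episilia-spike > values.yaml
```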
The license key and client ID can be obtained from Episilia. Please fill out the form and the team will get back to you.
`env` denotes the client-identified environment; examples include dev / qa / prod.
```yaml
global:
  client:
    id: episilia-helm
    env: test-helm
  license:
    key: episilia
  release:
    version: *release
  arn: ""                  # To annotate the service account (optional)
```
All common ops
Common ops config goes below.
```yaml
ops:
  log:
    debug: "off"           # Enable to get debug logs in all the servers
  metrics:
    publish:
      interval:
        seconds: "300"     # Interval at which cpanel pushes metrics to the console
  state:
    publish:
      interval:
        seconds: "10"      # Interval at which all servers push state to the console
  monitor:
    memory:
      max:
        mb: "1024"         # Max memory for search queries; tune based on each search server's memory usage
```
Kafka config
All Kafka-specific configuration goes below.
```yaml
kafka:
  group:
    search: episilia-search-group        # Kafka consumer group for search
    spike: episilia-spike-group          # Kafka consumer group for spike
    cpanel: episilia-cpanel-group        # Kafka consumer group for cpanel
    gateway:
      tail: episilia-gw-tail-group       # Kafka consumer group for gateway tail
    logwatcher:
      alert: episilia-lw-alert-group     # Kafka consumer group for alerts
      tail: episilia-lw-tail-group       # Kafka consumer group for logwatcher tail
    s3log:
      files: episilia-s3files-group      # Kafka consumer group for s3 logs
  topic:
    index:
      live: episilia-stagefiles          # Topic to publish indexed files (stage.topic)
      optimized: episilia-optfiles       # optimize.topic: publish file names post optimization
      labels: episilia-indexlabels       # Topic to publish labels from the indexer
    optimize:
      request: episilia-stagefolder      # optimize.request.topic: send folders to optimize
    s3log:
      files: episilia-s3logs             # Topic from which s3 log files are loaded
    cpanel:
      out: episilia-cpanel-out           # Internal topic (cpanel.data.topic)
    tail:
      request:
        in: episilia-tail-in             # Incoming tail requests
      response:
        out: episilia-tail-out           # Publish results for tail
    alert:
      response:
        out: episilia-alert-out          # Publish alerts for spike
  indexer:
    broker:
      list: redpanda:9092                # Kafka broker for logs; if not set, the default broker is used
  security:
    mode: none                           # Values: [none|login|oauth|kerberos]
    protocol: SASL_SSL                   # plaintext, ssl, sasl_plaintext, sasl_ssl
    sasl:
      mechanism: SCRAM-SHA-512           # PLAIN, SCRAM-SHA-256, SCRAM-SHA-512, OAUTHBEARER, GSSAPI
      username: episilia
      password: episilia123
  rack:
    aware: "false"                       # Set "true" to enable rack awareness
  logs:
    topics: episilia-logs                # Topic from which logs are loaded
    group: episilia-indexer-group        # Kafka consumer group for indexer
  internal:
    broker:
      list: redpanda:9092                # Kafka broker for internal communication
```
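As a sketch, the topics above can be created with the standard Kafka CLI against the configured broker; the partition and replication-factor values here are illustrative, not prescribed by the chart:

```bash
$ kafka-topics.sh --create --topic episilia-logs \
    --bootstrap-server redpanda:9092 \
    --partitions 3 --replication-factor 1
```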
Datastore
S3 bucket and folder details go below.
```yaml
datastore:
  s3:                          # (stage, final, source bucket for s3 log source)
    accesskey: ""              # File storage access key (e.g. AWS S3 access key)
    secretkey: ""              # File storage secret key (e.g. AWS S3 secret key)
    region: ""                 # File storage region (e.g. AWS S3 region)
    endpoint:
      url:                     # File storage endpoint URL (e.g. AWS S3 endpoint URL)
    sign:
      payload: true            # When using a MinIO bucket, disable payload signing for internal use
    bucket: episilia-bucket    # File storage bucket (e.g. AWS S3 bucket)
    folder: episilia-folder    # File storage folder (e.g. AWS S3 folder)
    work:
      folder: work-folder
    url:
      prefix: s3://
    useArn: false              # Enable to access using an ARN role
    assumeRole: ""             # ARN role
    https: "true"              # When using a MinIO bucket, disable https for internal use
```
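For example, the bucket can be created up front with the AWS CLI, or with MinIO's mc when self-hosting (the bucket name is the sample value from the config above; the region and the mc alias are assumptions):

```bash
$ aws s3 mb s3://episilia-bucket --region us-east-1   # AWS
$ mc mb myminio/episilia-bucket                       # MinIO ("myminio" is a configured mc alias)
```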
Indexer
Config for the indexer and optimizer goes below.
```yaml
indexer:
  image:
    repository: episilia/log-indexer   # Docker image of episilia-log-indexer
    tag: *release
  replicaCount: "1"                    # Kubernetes pod replicas of episilia-log-indexer
  annotations:
    deploy:
    service:
  resources:
    limits:
      cpu: "1"                         # CPU limit on episilia-log-indexer
      memory: 2Gi                      # Memory limit on episilia-log-indexer
    requests:
      cpu: 400m                        # CPU request on episilia-log-indexer
      memory: 300Mi                    # Memory request on episilia-log-indexer
  schema:
    appid:
      fixed: "defaultApp"              # If appid is a fixed string
      keys: "app_id"                   # Label(s) for the app identifier
    tenantid:
      fixed: "defaultTenant"           # If tenantid is a fixed string
      keys: "tenant_id"                # Label(s) for the tenant identifier
    message:
      key: "log"                       # Actual log message key
    timestamp:
      key: "time"                      # Timestamp key
      formats: "%Y-%m-%dT%H:%M:%S"     # Timestamp format (e.g. %Y-%m-%dT%H:%M:%S)
    exclude: "time"                    # Labels to be excluded from the list
  logs:
    source: kafka                      # Source: s3 or kafka
  tail:
    enable: "true"                     # Enables the Gateway server
    maxwait:
      ms: "5000"                       # Time to wait for tail logs
  ops:
    pause:
      consume:                         # Pauses ingest at the thresholds below
        file:
          max:
            count: "100"               # Applicable for file messages
        record:
          max:
            count: "500000"            # Applicable for log messages
        size:
          max:
            mb: "100"                  # Applicable for log messages
    datablock:
      writer:
        count: "1"                     # Datablocks zipped and written to disk, in sequence files
    json:
      processor:
        count: "2"                     # Number of JSON parsers
  optimize:
    block:
      maxbytes:
        mb: "50"

indexers3:
  image:
    repository: episilia/log-indexer   # Docker image of episilia-log-indexer
    tag: *release
  replicaCount: "1"                    # Kubernetes pod replicas of episilia-log-indexer-s3
  resources:
    limits:
      cpu: "1"                         # CPU limit on episilia-log-indexer-s3
      memory: 2Gi                      # Memory limit on episilia-log-indexer-s3
    requests:
      cpu: 400m                        # CPU request on episilia-log-indexer-s3
      memory: 300Mi                    # Memory request on episilia-log-indexer-s3

optimizer:
  replicaCount: "1"                    # Kubernetes pod replicas of episilia-optimizer
  resources:
    limits:
      cpu: "1"                         # CPU limit on episilia-optimizer
      memory: 2Gi                      # Memory limit on episilia-optimizer
    requests:
      cpu: 500m                        # CPU request on episilia-optimizer
      memory: 300Mi                    # Memory request on episilia-optimizer
```
Config for the live search server goes below.

```yaml
livesearch:
  image:
    repository: episilia/search        # Docker image of episilia-search
    tag: *release
  replicaCount: "1"                    # Kubernetes pod replicas of episilia-search
  resources:
    limits:
      cpu: "1"                         # CPU limit on episilia-search
      memory: 2Gi                      # Memory limit on episilia-search
    requests:
      cpu: 500m                        # CPU request on episilia-search
      memory: 600Mi                    # Memory request on episilia-search
  api:
    timeout:
      seconds: 60                      # Timeout for search queries
    request:
      max:
        days: "30"                     # Max queryable window between "from" and "to"; can be modified
  live:
    from:
      hours: 48                        # Hours from when the required index blocks should be loaded
    to:
      hours: 0                         # Hours till when the required index blocks should be loaded; keep "0" to get instant logs
  ops:
    index:
      cache:
        resetonstart: "true"
  labels:
    display:
      max:
        count: "1000"                  # Label count displayed in Spike-UI/Grafana
```
Config for the ondemand search server goes below.
```yaml
ondemandsearch:
  replicaCount: "1"                    # Kubernetes pod replicas of episilia-search-ondemand
  resources:
    limits:
      cpu: "1"                         # CPU limit on episilia-search-ondemand
      memory: 2Gi                      # Memory limit on episilia-search-ondemand
    requests:
      cpu: 500m                        # CPU request on episilia-search-ondemand
      memory: 600Mi                    # Memory request on episilia-search-ondemand
  prewarm:
    enabled: "false"                   # Set either the yyyymmddhh pair or the hours pair; if both are set, the hours pair is used
    from:
      hours: "2"                       # Hours from when the required labels should be loaded
      yyyymmddhh: " "                  # Date from when the required labels should be loaded (YYYYMMDDHH), or "0" to load from hours
    to:
      hours: "0"                       # Hours till when the required labels should be loaded
      yyyymmddhh: " "                  # Date till when the required labels should be loaded (YYYYMMDDHH), or "0" to load from hours
  ops:
    index:
      cache:
        s3list:
          seconds: "600"               # Stage files are kept cached on the server for this many seconds
```
Config for the historic search server goes below.
```yaml
fixedSearch:
  bucket: ""                           # S3 bucket for historic search to run in parallel; if empty, defaults to datastore.s3.bucket
  folder: ""                           # S3 folder for historic search to run in parallel; if empty, defaults to datastore.s3.folder
  replicaCount: "1"                    # Kubernetes pod replicas of historic episilia-search
  resources:
    limits:
      cpu: "1"                         # CPU limit on historic episilia-search
      memory: 2Gi                      # Memory limit on historic episilia-search
    requests:
      cpu: 500m                        # CPU request on historic episilia-search
      memory: 600Mi                    # Memory request on historic episilia-search
  fixed:
    from:
      yyyymmddhh: "2021092100"         # Date from when the required index blocks should be loaded (YYYYMMDDHH)
    to:
      yyyymmddhh: "2021092202"         # Date till when the required index blocks should be loaded (YYYYMMDDHH)
  api:
    timeout:
      seconds: 60                      # Timeout for search queries
```
Gateway
Gateway-specific configuration goes below.
```yaml
gateway:
  replicaCount: "1"                    # Kubernetes pod replicas of episilia-gateway
  image:
    repository: episilia/gateway       # Docker image of episilia-gateway
    tag: *release
  resources:
    limits:
      cpu: 500m                        # CPU limit on episilia-gateway
      memory: 600Mi                    # Memory limit on episilia-gateway
    requests:
      cpu: 300m                        # CPU request on episilia-gateway
      memory: 300Mi                    # Memory request on episilia-gateway
  service:
    type: "ClusterIP"
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: "false"
```
Spike
Spike server-specific configuration goes below.
```yaml
spike:
  JVM_MAX_SIZE: "1536m"
  JVM_NEWGEN_MIN_SIZE: "800m"
  JVM_NEWGEN_MAX_SIZE: "900m"
  metadata:
    backfill:
      forceupdate: "false"
      days: "0"
  s3logs:
    publish:
      seconds: "2"                     # Interval at which newly published files in s3 (via fluentd or other sources) are fetched
    partitionwise: "false"
  login:
    mode: local                        # Login through local, google, or okta
    local:
      password:
        encryptionkey: "i am groot"
    google:
      clientid: "google-client"
      token: "google-token"
    okta:
      clientid: "okta-client"
      token: "okta-token"
  pulse:
    access:
      key: "token"
      token: "random"
    host: "pulse-url"
    url: "http://pulse-url:50051/"
```
```yaml
spikeui:
  replicaCount: "1"                    # Kubernetes pod replicas of episilia-spike-ui
  image:
    repository: episilia/spike-ui      # Docker image of episilia-spike-ui
    tag: *release
  resources:
    limits:
      cpu: 500m                        # CPU limit on episilia-spike-ui
      memory: 800Mi                    # Memory limit on episilia-spike-ui
    requests:
      cpu: 300m                        # CPU request on episilia-spike-ui
      memory: 500Mi                    # Memory request on episilia-spike-ui
  service:
    type: "LoadBalancer"
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: "false"
```
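Since the Spike UI service is of type LoadBalancer, its external address appears once the cloud provider provisions it; the exact service name depends on the release, so list the services and check the EXTERNAL-IP column:

```bash
$ kubectl get svc | grep spike-ui
```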
Persistent Volume
If PV is enabled, configure it below.
```yaml
persistence:
  enabled: "true"
  storageClassName: gp2                # Storage class name (differs per cloud provider)
  accessModes:
    - ReadWriteOnce                    # Access modes
  size: "100Gi"                        # Size of the PVC mounted to episilia-search for live search
  historicSize: "100Gi"                # Size of the PVC mounted to episilia-search for historic search
  ondemandSize: "100Gi"                # Size of the PVC mounted to episilia-search for ondemand search
  spikeSize: "10Gi"                    # Size of the PVC mounted to episilia-spike
  # annotations: {}
  finalizers:
    - kubernetes.io/pvc-protection
  # selectorLabels: {}
  # subPath: ""
  # existingClaim:
```
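The storageClassName differs per cloud (gp2 is the default AWS EBS class); to list the classes available in your cluster:

```bash
$ kubectl get storageclass
```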
The -i or --install flag can be passed to helm upgrade to run an install first if a release by this name doesn't already exist. To perform a rollback, use helm rollback; if no revision is specified, the chart is rolled back to the previous version.
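A minimal sketch of both operations, assuming the release is named episilia and the edited values file from Step 2:

```bash
$ helm upgrade -i episilia episilia/episilia-spike -f values.yaml   # install if absent, else upgrade
$ helm rollback episilia      # roll back to the previous revision
$ helm rollback episilia 2    # or roll back to a specific revision
```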