$ top
top - 07:44:05 up 33 days, 16:59, 1 user, load average: 0.06, 0.17, 0.65
Tasks: 298 total, 1 running, 287 sleeping, 0 stopped, 10 zombie
%Cpu(s): 1.3 us, 1.8 sy, 0.0 ni, 97.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 32164.2 total, 19245.4 free, 7210.3 used, 5708.6 buff/cache
MiB Swap: 0.0 total, 0.0 free, 0.0 used. 28163.7 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3044818 root 20 0 7288100 743348 37508 S 2.0 2.3 10807:03 kubelet
1496378 xadm 20 0 2336832 392920 34856 S 0.0 1.2 7:28.62 mysqld
847 root 20 0 6305616 248856 26752 S 2.0 0.8 3886:47 crio
1494428 root 20 0 2187556 233064 45072 S 0.0 0.7 26:46.82 minio
$ ps -ef | grep 1496378
xadm 1496378 1496375 0 Dec16 ? 00:07:29 /opt/bitnami/mysql/bin/mysqld --defaults-file=/opt/bitnami/mysql/conf/my.cnf --basedir=/opt/bitnami/mysql --datadir=/bitnami/mysql/data --socket=/opt/bitnami/mysql/tmp/mysql.sock --pid-file=/opt/bitnami/mysql/tmp/mysqld.pid
We can see that PID 1496378 is a MySQL server daemon, but how do we tie it back to Kubernetes?
#!/bin/bash
# Map a host PID back to its Kubernetes pod using the CRI-O cgroup.
if [ -z "$1" ]; then
    echo "Usage: $0 [PID]"
    exit 1
fi
if ! grep -q crio "/proc/$1/cgroup" 2>/dev/null; then
    echo "PID $1 is not in a container, or PID not found"
    exit 1
fi
# The cgroup path contains the container ID in the form "crio-<id>.scope".
PODID=$(sudo head -1 "/proc/$1/cgroup" | awk -F'crio-' '{print $2}')
PODID=${PODID:0:13}
sudo crictl inspect "$PODID" | jq '.status.labels'
Let's use our script:
./pidInfo.sh 1496378
{
"io.kubernetes.container.name": "mysql",
"io.kubernetes.pod.name": "x-mysql-0",
"io.kubernetes.pod.namespace": "x-portal",
"io.kubernetes.pod.uid": "50579273-e3f1-479b-a014-2d3a5d13aea7"
}
N.B.: crictl and jq must be installed.
Fail2Ban is software that monitors system logs and blocks hackers after multiple failed logins. In addition, it protects servers against DoS, DDoS, dictionary, and brute-force attacks. Fail2ban bans the offending IP using the system firewall for a specific length of time. When the ban period expires, the IP address is removed from the ban list.
In this article, we will explain what Fail2Ban is and its use cases. We will also show you how to install and set up Fail2Ban and create a ban dashboard in Grafana. I will use an Ubuntu server, but you can choose another distribution.
Fail2ban installation
The Fail2ban package is included in the default Ubuntu repositories. To install it, enter the following commands with sudo privileges:
sudo apt update
sudo apt install fail2ban
It starts automatically after installation. To check the service status, use this command:
systemctl status fail2ban
The output should show the service as active (running).
Now enable the service so it starts automatically on boot:
sudo systemctl enable fail2ban
Fail2ban configuration
The fail2ban service keeps its configuration files in the /etc/fail2ban directory. There is a file with defaults called jail.conf.
Create a .local configuration file from the default jail.conf file:
cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
To start configuring the Fail2ban server, open the jail.local file:
vim /etc/fail2ban/jail.local
Whitelist IP Addresses
IP addresses, IP ranges, or hosts that you want to exclude from banning can be added to the ignoreip directive. Here you should add your local PC IP address and all other machines that you want to whitelist.
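For example, a minimal sketch of the directive in the [DEFAULT] section of jail.local (the addresses shown are placeholders for your own):

[DEFAULT]
# hosts and networks that will never be banned
ignoreip = 127.0.0.1/8 ::1 192.168.1.0/24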
Ban Settings
The values of bantime, findtime, and maxretry options define the ban time and ban conditions.
bantime is the length of time that a host is banned. When no suffix is specified, the value is in seconds. By default, bantime is set to 10 minutes; I prefer a longer ban time, as in the example below.
findtime is the window of time within which maxretry failures must occur to trigger a ban.
maxretry is the number of failures before a host gets banned.
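A sketch of the three options together in jail.local (the values are illustrative, not recommendations):

[DEFAULT]
# ban for one day after 5 failures within a 10-minute window
bantime  = 1d
findtime = 10m
maxretry = 5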
Fail2ban Jails
Fail2ban uses the concept of jails. A jail describes a service and includes filters and actions. Log entries matching the search pattern are counted, and when a predefined condition is met, the corresponding actions are executed.
The settings we discussed in the previous section can be set per jail.
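For example, a minimal sshd jail sketch (the values are illustrative):

[sshd]
enabled  = true
port     = ssh
maxretry = 3
bantime  = 1h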
Each time you edit a configuration file, you need to restart the Fail2ban service for changes to take effect:
$ sudo systemctl restart fail2ban
Fail2ban-client
There is a Fail2ban client for managing its rules. Keep in mind that all changes made here will be reset after a system reboot or service restart. To view the active jails, use this command:
sudo fail2ban-client status
Check the jail status:
sudo fail2ban-client status sshd
Unban an IP:
sudo fail2ban-client set sshd unbanip 103.195.150.43
Ban an IP:
sudo fail2ban-client set sshd banip 103.195.150.43
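Recent Fail2ban versions (0.11 and later) can also list all currently banned IPs; this assumes your installed version supports the subcommand:
sudo fail2ban-client banned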
Fail2Ban Prometheus Exporter
A Prometheus exporter for Fail2ban is available at https://gitlab.com/hectorjsmith/fail2ban-prometheus-exporter. The exporter can be run as a standalone binary or a Docker container.
I will use the Docker container:
docker run -d \
  --name "fail2ban-exporter" \
  -v /var/run/fail2ban:/var/run/fail2ban:ro \
  -p "9191:9191" \
  registry.gitlab.com/hectorjsmith/fail2ban-prometheus-exporter:latest
All metric names are prefixed with f2b_
The metrics exported by this tool are compatible with Prometheus and Grafana.
Add a scrape job for the exporter to your prometheus.yml:
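A minimal sketch (the original config was not preserved; the target host is a placeholder for wherever the exporter runs):

scrape_configs:
  - job_name: "fail2ban"
    static_configs:
      # replace <exporter-host> with the machine running the exporter
      - targets: ["<exporter-host>:9191"]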
A sample Grafana dashboard can be found in the grafana.json file. Just import the contents of this file into a new Grafana dashboard to get started.
Thank you for reading.
https://community.suitecrm.com/t/how-to-create-a-custom-module-from-scratch/36510
If you go to Admin and click the Module Builder icon, it's a fairly simple process. It will ask what you want to name your package and the key you want to use for it. A package can contain multiple modules or just one, and the key is prefixed to all of the package's database table names to prevent them from conflicting with existing tables.
After that, to create a new module, just click the new module icon and select what type of module you want. The different options come with different default fields. If none of the other options seem relevant to you, choose Basic, which is a minimal module.
You can add fields and edit the different views (what fields are on the edit screen, what fields should be on the list screen, etc.) in here. When you are happy with it, save your changes; then, when you go back into Module Builder and select your package, there is an option to deploy the package.
After you have deployed your module, it should appear in the menu by default. If it doesn't, do a Repair and Rebuild, then go into Display Modules and Subpanels and check it's not hidden.
After you have deployed your module you can still add fields and relationships to it in Studio, but it is difficult to edit existing fields. For this reason, I tend to deploy a module with minimal fields and then use Studio to create them afterwards.
You can add logic hooks and workflows to your module to add extra functionality in the same way you’d add them to any other module. The best guide to making advanced customisations to Suite is my colleague Jim’s book SuiteCRM for Developers which you can purchase here:
https://leanpub.com/suitecrmfordevelopers
We are currently working on improving the documentation for SuiteCRM, but the code is based on SugarCRM, so most Sugar tutorials and documentation will also be relevant to Suite.
We do have a user guide here that covers the main functionality (though it doesn't really go into customising Suite, I'm afraid):
https://suitecrm.com/wiki/index.php/Userguide#What_is_in_the_User_Guide.3F
autotracer
Connection
openssl s_client -connect outlook.office365.com:993 -crlf [-quiet]
Tests
. login <user> <password>
a select INBOX
b status INBOX (MESSAGES)
c fetch 1 all
d logout
HostAliases simply injects entries into the /etc/hosts file.
To modify an entry via CoreDNS, edit the ConfigMap:
$ kubectl edit configmap coredns -n kube-system
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health
        rewrite name orderer0.example.com orderer0-example-com.orbix-mvp.svc.cluster.local
        rewrite name peer0.example.com peer0-example-com.orbix-mvp.svc.cluster.local
        kubernetes cluster.local {
           pods insecure
           upstream
           fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
    }
The line of interest here is: rewrite name orderer0.example.com orderer0-example-com.orbix-mvp.svc.cluster.local
All that remains is to restart CoreDNS:
kubectl delete pod -n kube-system coredns-<xxxxx>
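An alternative that avoids looking up the pod name, assuming CoreDNS runs as the usual Deployment in kube-system:
kubectl -n kube-system rollout restart deployment coredns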
In addition to the default entries, you can add extra entries to the hosts file. For example: to resolve foo.local and bar.local to 127.0.0.1, and foo.remote and bar.remote to 10.1.2.3, you can configure HostAliases for a pod under .spec.hostAliases:
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  - ip: "10.1.2.3"
    hostnames:
    - "foo.remote"
    - "bar.remote"
  containers:
  - name: cat-hosts
    image: busybox:1.28
    command:
    - cat
    args:
    - "/etc/hosts"
Connect to the master, then switch to root:
ssh ast-srv-096
su -
List the nodes, then disable scheduling on the target node:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
master1.k8s01.p10s.asten Ready control-plane 96d v1.25.3
worker01.k8s01.p10s.asten Ready <none> 92d v1.25.2
worker02 Ready <none> 34d v1.26.0
root@master1:~# kubectl cordon worker01.k8s01.p10s.asten
node/worker01.k8s01.p10s.asten cordoned
Perform the operations that require the node to be down, and remember to re-enable the node afterwards:
root@master1:~# kubectl uncordon worker01.k8s01.p10s.asten
node/worker01.k8s01.p10s.asten uncordoned
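Note that cordon only blocks new scheduling; if the pods already running on the node must be evicted too, drain it instead (the flags to use depend on your workloads):
kubectl drain worker01.k8s01.p10s.asten --ignore-daemonsets --delete-emptydir-data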
An IBM employee maintains builds of various programs for ppc64le:
Power Devops
Adding RSCT on Ubuntu machines
On Ubuntu, you need to add this repository:
sudo add-apt-repository ppa:ibmpackages/rsct
sudo apt-get update
Resetting RSCT
/usr/sbin/rsct/bin/rmcctrl -z
/usr/sbin/rsct/bin/rmcctrl -A
/usr/sbin/rsct/bin/rmcctrl -p
/usr/sbin/rsct/install/bin/recfgct
/usr/sbin/rsct/bin/rmcctrl -p
To build a statically linked Go program:
go mod download
go build -ldflags="-extldflags=-static" -o bin/aptly
Delete all Elasticsearch indices:
curl -XDELETE localhost:9200/_all
Clean up the pods marked 'Completed':
kubectl get pod -A | grep Completed | awk '{print "kubectl delete pod -n " $1 " " $2}' | sh
If a worker is cordoned, you can quickly end up with a long list of jobs; here is how to delete them:
kubectl get job.batch -A | grep '0/1' | awk '{print "kubectl delete job.batch -n " $1 " " $2 }' | sh
To count the pods in each state:
kubectl get pod -A --no-headers | awk '{print $4}' | sort | uniq -c | awk '{print $2 " : " $1}'
Completed : 99
ContainerCreating : 6
CrashLoopBackOff : 1
Init:0/2 : 12
Init:ImagePullBackOff : 12
Running : 250
kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' |\
sort |\
uniq -c
This gives the list of images used on the cluster, with the number of uses of each:
5 docker-registry.mut.p10s.asten/argoproj/argocd:v2.4.12
2 docker-registry.mut.p10s.asten/calico/apiserver:v3.24.1
7 docker-registry.mut.p10s.asten/calico/csi:v3.24.1
1 docker-registry.mut.p10s.asten/calico/kube-controllers:v3.24.1
7 docker-registry.mut.p10s.asten/calico/node-driver-registrar:v3.24.1
You can use crane to manage images. I built a container with crane:
podman run --rm -it docker-registry.p10s.asten/tools/crane:0.15.1
Once inside the container, list the repositories:
crane catalog --insecure docker-registry.p10s.asten
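From there, other crane subcommands are useful, e.g. listing the tags of a repository (the repository name is a placeholder):
crane ls --insecure docker-registry.p10s.asten/<repo>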
On the registry pod, in the registry container, run a garbage collection:
bin/registry garbage-collect --delete-untagged=true /etc/docker/registry/config.yml
Check the Active Directory schema version:
Get-ADObject (Get-ADRootDSE).schemaNamingContext -Property objectVersion
Unable to log in from the console.
Analysis
Checking the pods returns:
$ oc get pods -n openshift-console
$ oc get pod
NAME READY STATUS RESTARTS AGE
console-5f54bf4d68-5z7fj 0/1 Running 4 (2m37s ago) 16m
console-5f54bf4d68-rz4xl 0/1 Running 4 (3m3s ago) 16m
downloads-79fb7d56c9-v5fm9 1/1 Running 1 60d
downloads-79fb7d56c9-zb2qx 1/1 Running 1 60d
$ oc logs console-5f54bf4d68-5z7fj
E0627 09:26:05.260802 1 auth.go:231] error contacting auth provider (retrying in 10s): request to OAuth issuer endpoint https://oauth-openshift.apps.sand-4b7b.asten.maq/oauth/token failed: Head "https://oauth-openshift.apps.sand-4b7b.asten.maq": x509: certificate has expired or is not yet valid: current time 2022-06-27T09:26:05Z is after 2022-05-20T10:10:00Z
Resolution
Replace the expired certificates.
Copy /root/.kube/config to a master machine:
$ scp /root/.kube/config core@master-0.sand-4b7b.asten.maq:/tmp/kubeconfig
$ ssh core@master-0.sand-4b7b.asten.maq
$ sudo -i
# export KUBECONFIG=/tmp/kubeconfig
# oc whoami
kube:admin
Redeploy the ingress certificate:
# oc delete cm custom-ca -n openshift-config
configmap "custom-ca" deleted
[root@master-0 ~]# oc create cm custom-ca --from-file=ca-bundle.crt=/etc/pki/tls/certs/ca-bundle.crt -n openshift-config
configmap/custom-ca created
Update the cluster proxy configuration:
$ oc patch proxy/cluster \
  --type=merge \
  --patch='{"spec":{"trustedCA":{"name":"custom-ca"}}}'
Create a secret that contains the wildcard certificate chain and key:
$ oc create secret tls <secret> \
  --cert=<path-to-cert> \
  --key=<path-to-key> \
  -n openshift-ingress
<secret> is the name of the secret that will contain the certificate chain and private key, <path-to-cert> is the path to the certificate chain on your local file system, and <path-to-key> is the path to the private key associated with this certificate.
Update the Ingress Controller configuration with the newly created secret:
$ oc patch ingresscontroller.operator default \
  --type=merge \
  -p '{"spec":{"defaultCertificate":{"name":"<secret>"}}}' \
  -n openshift-ingress-operator
Replace <secret> with the name used for the secret in the previous step.
Once the secret is patched, a rolling deployment restarts the affected operators, re-enabling console access and oc CLI login.
Cause
OpenShift ingress custom certificates are an integral part of the security chain for logins and system operations. An expired ingress certificate cannot be bypassed with --insecure-skip-tls-verify=true.
oc exec problem
The following message appears:
oc exec -it <pod> -- sh
error: unable to upgrade connection: Unauthorized
Diagnostics
oc get nodes -o jsonpath="{range .items[*]}{@.metadata.name}{'\t\t'}{@.metadata.annotations.machineconfiguration\.openshift\.io/state}{'\n'}{end}" | grep -v Done
master-0.sand-4b7b.asten.maq Degraded
master-1.sand-4b7b.asten.maq Degraded
master-2.sand-4b7b.asten.maq Degraded
worker-0.sand-4b7b.asten.maq Degraded
worker-1.sand-4b7b.asten.maq Degraded
Connect to the node(s):
ssh core@master-0.sand-4b7b.asten.maq
Red Hat Enterprise Linux CoreOS 48.84.202110262319-0
Part of OpenShift 4.8, RHCOS is a Kubernetes native operating system
managed by the Machine Config Operator (clusteroperator/machine-config).
WARNING: Direct SSH access to machines is not recommended; instead,
make configuration changes via machineconfig objects:
https://docs.openshift.com/container-platform/4.8/architecture/architecture-rhcos.html
Last login: Thu Feb 10 10:17:11 2022 from 172.25.152.161
Then test:
[core@master-0 ~]$ sudo bash
[root@master-0 core]# journalctl -u kubelet --no-pager | grep -i x509
Feb 09 03:03:37 master-0.sand-4b7b.asten.maq hyperkube[2080]: E0209 03:03:37.828374 2080 server.go:269] "Unable to authenticate the request due to an error" err="[verifying certificate SN=36376146569637225864389036024789239918, SKID=, AKID=49:43:1C:BB:84:0B:55:F8:95:33:2D:7B:48:F8:80:B4:DB:76:78:75 failed: x509: certificate signed by unknown authority, Post \"https://api-int.sand-4b7b.asten.maq:6443/apis/authentication.k8s.io/v1/tokenreviews\": EOF]"
If you hit this error, see:
KCS 4773161 - Node degraded due to mode mismatch for file in Openshift 4
KCS 4550741 - Troubleshooting OpenShift Container Platform 4.x: machine-config operator
KCS 4970731 - Node in degraded state because of the use of a deleted machineconfig
Setting the default namespace with K8s
kubectl config set-context --current --namespace=<namespace>
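To verify which namespace is now the default:
kubectl config view --minify | grep namespace: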
Having your own kubeconfig with K3s
To work comfortably, create a dedicated user:
adduser admin-<user>
Copy the kubeconfig:
cp /etc/rancher/k3s/k3s.yaml /home/admin-<user>/.kube/config
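Note: the .kube directory must exist before the copy, and the copied file will still belong to root. A sketch of the fix, assuming the admin-<user> account created above:
mkdir -p /home/admin-<user>/.kube                    # create the directory first
chown -R "admin-<user>:" /home/admin-<user>/.kube    # hand it over to the user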
Then add a line to the .profile:
vi /home/admin-<user>/.profile
…
export KUBECONFIG=~/.kube/config
Opening a port on a RHEL machine
On RedHat machines, the firewall no longer relies on iptables, so iptables -L will not display the active rules.
The firewall is now managed by firewalld.
sudo firewall-cmd --zone=public --add-port=<port>/tcp --permanent
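Since --permanent only writes the rule to the firewalld configuration, reload for it to take effect immediately:
sudo firewall-cmd --reload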
Using emojis
See this site: emojipedia
Veeam
Links shared during a training session:
https://vceplus.io/exam-vmce2021/
https://bp.veeam.com/vbr
https://rasmushaslund.com (exam options 1, 2, 3)
https://www.examtopics.com/exams/veeam/vmce2020/
https://helpcenter.veeam.com/docs/backup/vsphere/data_restore_in_direct_nfs_acc.html?ver=110
Gantlab
Install K3S; for MetalLB installation, see https://metallb.universe.tf/installation/
Fixing an issue on a pod
Edit the deployment and override the container command with sleep, so the pod stays up long enough to be inspected:
      matchExpressions:
      - key: object.component.model.app.asten/role
        operator: In
        values:
        - core
    topologyKey: kubernetes.io/hostname
  weight: 100
containers:
- args:
  - '14400'
  command:
  - sleep
  env:
  - name: MARIADB_PASSWORD
    valueFrom:
      secretKeyRef:
        key: PASSWORD
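Once the pod restarts with the sleep command, open a shell in it to investigate (the pod name is a placeholder):
kubectl exec -it <pod> -- sh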
Inside the container, we can then see what is filling the disk:
total 16
drwxrwsrwx 8 apache apache 4096 Sep 8 03:58 de_
drwxrwsrwx 11 apache apache 4096 Oct 18 02:04 dev
drwxrwsr-x 6 apache apache 4096 Oct 18 02:02 pro_
drwxrwsr-x 10 apache apache 4096 Oct 17 08:59 prod
~/var/cache $ du -sh *
26.0M de_
3.7G dev
9.6M pro_
45.3M prod
~/var/cache $ cd dev
~/var/cache/dev $ du -sh *
4.0K App_KernelDevDebugContainer.php
0 App_KernelDevDebugContainer.php.lock
300.0K App_KernelDevDebugContainer.php.meta
128.0K App_KernelDevDebugContainer.preload.php
1.5M App_KernelDevDebugContainer.xml
300.0K App_KernelDevDebugContainer.xml.meta
380.0K App_KernelDevDebugContainerCompiler.log
12.0K App_KernelDevDebugContainerDeprecations.log
6.5M Container8kXRODl
0 Container8kXRODl.legacy
6.5M ContainerJdAXz6Z
1.6M Symfony
4.0K annotations.map
8.0K annotations.php
912.0K doctrine
33.9M pools
3.6G profiler
4.0K serialization.php
3.2M translations
65.8M twig
148.0K url_generating_routes.php
28.0K url_generating_routes.php.meta
168.0K url_matching_routes.php
28.0K url_matching_routes.php.meta
8.0K validation.php
12.0K vich_uploader
112.0K webpack_encore.cache.php
~/var/cache/dev $ pwd
/var/www/html/var/cache/dev
The dev profiler directory (3.6G) accounts for nearly all of the cache usage.