Thought this would be easier… Nextcloud / MariaDB

I’ve got everything set up through Portainer. I have a container for Nextcloud and one for MariaDB. Each has a volume, and both are on the same network. I manually created the user, password, and database in the MariaDB container for Nextcloud to access. During Nextcloud’s initial setup I cannot connect to the database. What am I doing wrong? I’ve tried container names, IPs, and host:port variations and I still can’t connect. Any help would be appreciated. submitted by /u/ForSquirel
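For anyone comparing notes, here is a minimal compose-style sketch of the intended wiring, assuming the official images; the credentials, volume names, and port mapping are placeholders. The key detail is that the database host must be the MariaDB container/service name on the shared network, with no port needed for the default 3306.

```yaml
version: '3'

services:
  db:
    image: mariadb
    volumes:
      - db:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: rootpass      # placeholder
      MYSQL_DATABASE: nextcloud
      MYSQL_USER: nextcloud
      MYSQL_PASSWORD: ncpass             # placeholder
  app:
    image: nextcloud
    ports:
      - "8080:80"
    volumes:
      - nextcloud:/var/www/html
    environment:
      MYSQL_HOST: db                     # the MariaDB service/container name
      MYSQL_DATABASE: nextcloud
      MYSQL_USER: nextcloud
      MYSQL_PASSWORD: ncpass

volumes:
  db:
  nextcloud:
```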

CKA exam question

I just completed the CKA exam and finished everything except the last question. There’s a cluster with a master and a worker. Running kubectl gives connection refused on port 6443. SSHing into the master, the logs show it was unable to init. I tried a lot of things but finally ran kubeadm init again, which got the master running. But on the worker, even running join failed. Then I ran out of time. Has anyone else hit this issue? What was the resolution? submitted by /u/aditseng
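Not the exam’s official answer, but a commonly cited recovery path after re-running kubeadm init is regenerating the join credentials, since the worker’s old token and CA hash no longer match the rebuilt control plane; a hedged sketch:

```sh
# On the (re)initialized master: print a fresh join command with a new token.
kubeadm token create --print-join-command

# On the worker: clear the stale state from the failed cluster, then rejoin.
kubeadm reset -f
kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```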

Custom Helm

Greetings, I’m using the stable/prometheus Helm chart and I’ve configured a custom values file that further configures Alertmanager. I can install the chart without any issues, w00t w00t. However, there’s one thing I’m not able to figure out: for the Slack receiver’s slack_configs/api_url, I want to pass the value through as an environment variable rather than keep it hardcoded in the file. I was thinking of adding something like --set alertmanager.alertmanagerFiles.receivers.slack.apiurl=xxxx to my helm install command. I’m still reading through the documentation but figured I’d post here to see if anyone has done this before :).

helm install test-release stable/prometheus -f customALM.yml --set alertmanager.enabled=true

customALM.yml:

```
alertmanagerFiles:
  alertmanager.yml:
    route:
      group_wait: 10s
      group_interval: 5m
      repeat_interval: 30m
      receiver: "slack"
      routes:
      - receiver: "slack"
        group_wait: 10s
        match_re:
          severity: error|warning
        continue: true
    receivers:
    - name: "slack"
      slack_configs:
      - api_url: 'howDoIpassThisAsA_ENV_VAR?'
        send_resolved: true
        channel: 'monitoring'
        text: "{{ range .Alerts }}<!channel> {{ .Annotations.summary }}\n{{ .Annotations.description }}\n{{ end }}"
```

submitted by /u/PointManBX
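Untested, but Helm’s --set/--set-string can address list items by index and escape the dot inside the alertmanager.yml key, so something along these lines may do what the poster wants (the environment variable name is a placeholder):

```sh
export SLACK_API_URL="https://hooks.slack.com/services/XXX"   # placeholder

helm install test-release stable/prometheus \
  -f customALM.yml \
  --set alertmanager.enabled=true \
  --set-string "alertmanagerFiles.alertmanager\.yml.receivers[0].slack_configs[0].api_url=${SLACK_API_URL}"
```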

Node affinity rules for Service of type LoadBalancer (trying to preserve source IP)?

I’m still pretty new at Kubernetes, please be gentle. For background: I’m running a k3s cluster in OpenStack VMs on top of Ubuntu 18.04, using Docker as the container engine. All VMs are in a private network with a single floating IP pointing to the master (which is also a Kubernetes node) of the cluster. I want to preserve the source IP of all my http/https traffic until it reaches a single Traefik pod (deployed via Helm). I only have one ingress IP and a relatively small deployment with no HA planned. I have read through this and have successfully managed to set up my Traefik Helm deployment to preserve the source IP on ports 80/443. Unfortunately it seems to depend on how the Service LoadBalancer is deployed: it works if my status.loadBalancer.ingress[{ip: ..}] coincides with the node I point the (floating) ingress IP to, and it doesn’t if the Service LoadBalancer picks another ingress.

my traefik helm chart config:

```
image:
  tag: "2.2.0"
service:
  spec:
    externalTrafficPolicy: Local
  type: LoadBalancer
nodeSelector:
  k3s.io/hostname: "master"  # ensures the traefik pod runs on the floating IP node
additionalArguments:
  - "--api.insecure=true"
  - "--accesslog=true"
  - "--log.level=DEBUG"
```

Deployed with helm install -f traefik-helm-config.yml traefik traefik/traefik --namespace kube-system (Helm repo: https://github.com/containous/traefik-helm-chart), which results in the following Service:

```
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2020-04-07T22:53:52Z"
  labels:
    app.kubernetes.io/instance: traefik
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: traefik
    helm.sh/chart: traefik-8.0.0
  name: traefik
  namespace: kube-system
  resourceVersion: "1314694"
  selfLink: /api/v1/namespaces/kube-system/services/traefik
  uid: 915d4d03-d496-4635-a112-e7c6cdb14b69
spec:
  clusterIP: 10.43.245.157
  externalTrafficPolicy: Local
  healthCheckNodePort: 30642
  ports:
  - name: web
    nodePort: 31514
    port: 80
    protocol: TCP
    targetPort: web
  - name: websecure
    nodePort: 32517
    port: 443
    protocol: TCP
    targetPort: websecure
  selector:
    app.kubernetes.io/instance: traefik
    app.kubernetes.io/name: traefik
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: X.X.X.X  # <-- If this matches the master host IP everything works
```

Is there any way to pin this loadBalancer ingress IP? I have tried spec.externalIPs, which doesn’t seem to work. I assume I now have this (taken from the Kubernetes docs, https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-loadbalancer):

```
        client
          |
        lb VIP
        / ^
       v /
health check --->  (master)      (any other worker)  <--- health check
        200  <---   ^ |                              ---> 500
                    | V
                 endpoint
```

But if the LB grabs the wrong IP/node, does it route the traffic back and forth between the nodes before it reaches the LB? Is there a better way to go about this? As far as I can tell, I could do this with MetalLB or by lowering the NodePort range down to 80, but it seems like if I could just pin this loadBalancer IP, I wouldn’t have to install extra things or “compromise” my NodePort range. submitted by /u/Jawastew
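For the MetalLB route mentioned at the end, a hedged sketch of how the floating IP could be pinned (MetalLB v0.x ConfigMap format; the pool name is an assumption, and the k3s built-in service load balancer would need to be disabled first):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: floating
      protocol: layer2
      addresses:
      - X.X.X.X/32        # the single floating IP
```

The Service can then request that exact address with spec.loadBalancerIP: X.X.X.X, which removes the dependence on which node the load balancer happens to pick.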

Worker too full of secrets

I was investigating today why the nodes in my (small) cluster were running out of storage. I was dumbstruck by the fact that, for each secret mounted into a pod, Kubernetes creates a storage device on the node with a size of 1.9G, even though the secrets are close to nothing in terms of size! I guess this is Kubernetes being “cautious” about what is going to go inside the secret, but I do know they won’t get bigger than a few K. Is there any kind of configuration so that Kubernetes creates these volumes with a much smaller footprint? As an example, this is what I see on my node: https://preview.redd.it/ke50os9qzgr41.png?width=2281&format=png&auto=webp&s=f862b69aed7d253d6f00876dd7d6daa40fc6d86e submitted by /u/dashcubeit

1st – Setting up a bittorrent with VPN on docker

Hi! I previously set up a Plex server on a VM server with various Windows and Linux machines running, and that worked amazingly; however, it was not very efficient and there was a lot of overhead. Therefore I recently decided to upgrade my hardware and at the same time set up a Plex server with Docker, but there have been quite a lot of roadblocks that I’ve faced. I am currently running a server with Radarr / Sonarr and Transmission, however this is barely doing its job, and I’ve decided I want to start from scratch and do it the right way to begin with.

DISCLAIMER: I am a complete rookie when it comes to Docker; my skillset in this department consists of having some basic Linux understanding and being very persistent & eager to learn. Please keep this in mind when composing suggestions.

The first thing I need in order to proceed before adding automation and so forth is a proper BitTorrent client. I have attempted setting up Transmission with PIA VPN, however this has failed catastrophically every time. I have read quite a bit of documentation and guides and I was unable to get it working. The reason I chose Transmission is that it is (by my understanding) the most-used client for torrenting in Docker with Sonarr / Radarr and a VPN (more documentation, right). qBittorrent is preferred, seeing as I have previous experience with it, however I am willing to try anything. The same goes for PIA; I am willing to use a different VPN provider if necessary.

What I am hoping is that someone can share their own personal setup and experiences. It would be greatly appreciated if this is followed up by a little guide which explains what needs to be done to get the same results you do. If any additional information is needed to understand my situation, I am happy to share all the details! submitted by /u/I_AM_NIKOLAI
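One frequently recommended setup for exactly this combination is the haugene/transmission-openvpn image, which bundles Transmission behind an OpenVPN tunnel and supports PIA as a provider. A hedged compose sketch, with credentials and paths as placeholders:

```yaml
version: '3'

services:
  transmission-openvpn:
    image: haugene/transmission-openvpn
    cap_add:
      - NET_ADMIN                      # needed so the container can manage the tunnel
    environment:
      OPENVPN_PROVIDER: PIA
      OPENVPN_USERNAME: your_pia_user  # placeholder
      OPENVPN_PASSWORD: your_pia_pass  # placeholder
      LOCAL_NETWORK: 192.168.1.0/24    # your LAN, so the web UI stays reachable
    ports:
      - "9091:9091"                    # Transmission web UI
    volumes:
      - ./downloads:/data
    restart: unless-stopped
```

Sonarr/Radarr then talk to Transmission on port 9091 as usual, while all peer traffic leaves through the VPN.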

Newbie question on Docker workflow

Hello, I am trying to wrap my head around how to use Docker. I have access to a server machine with a GPU and a docker container which contains all the Python packages I need for my work. Let’s say I write some code locally and I want to run it on the server machine. What is the best way of doing that? I can think of one workflow: ssh into the machine, scp the new script file from my local machine to the server (or git pull if it’s committed), spin up the Docker container, copy the new script into the container, and execute it. This doesn’t sound ideal. Is there a better way? submitted by /u/vaaal88
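One lighter-weight pattern, sketched below with placeholder names: drive the remote Docker engine over SSH (supported since Docker 18.09) and bind-mount the project directory instead of copying files into the container.

```sh
# Sync the local project to the server (or use git), then run it remotely.
rsync -av ./project/ user@gpu-server:~/project/

# Point the local CLI at the remote engine over SSH and bind-mount the code;
# "mytools:latest" and train.py are placeholders for your image and script.
# (--gpus requires Docker 19.03+ with the NVIDIA container toolkit.)
docker -H ssh://user@gpu-server run --rm --gpus all \
  -v /home/user/project:/workspace -w /workspace \
  mytools:latest python train.py
```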

Making sense of Kubernetes / OpenShift in general

Short background on myself: I’m a sysadmin of HPC clusters. I don’t really write code apart from Ansible and shell scripts for automation tasks. I usually deal with bare-metal boxes that run MPI workloads over InfiniBand, or KVMs that run services like the batch scheduler and directory services. With containers, multicloud, cloud bursting, orchestration, Kubernetes & OpenShift all being pushed into my face every day, I started looking at whether I can use these to streamline things / make the system more responsive to software stack and workload changes. So I’m checking if these are the right tools to achieve the following:

- Container image based slurmd compute node (re)deployment & autoscaling between tenants within the same pool of nodes (atomic host on bare metal?)
- Docker container based service nodes inheriting some sort of high availability / resilience from Kubernetes: slurmctld, slurmdbd, directory services, sshd login node, crond scheduler node, rsyslogd, elasticsearch

Are these suitable workloads to put into containers? Is OKD / Kubernetes the right tool to orchestrate such containers? I’d prefer a simple flat-bridged network here, as security is of little concern among the sites I manage and that’s also what the users expect, but as far as I’m aware that doesn’t seem to be a (common) option? And as far as I know, I’ll need separate persistent storage which is also highly available; what are the common options here? Kernel NFS server? Ganesha NFS provided by some of the nodes in the Kubernetes cluster? Something else I haven’t mentioned? If I need high availability of the controller nodes, is that a 3-node minimum for me? Can I also place Docker compute workloads on those, or is that highly discouraged? How difficult is it to upgrade the Kubernetes cluster, or is it entirely not worth the admin effort to install a Kubernetes cluster for such workloads? I’m sorry if I sound too noob, but every time I tried experimenting with implementing these system aspects of the cluster, I always felt overwhelmed (having to dedicate/configure a minimum of half a dozen nodes and hundreds of pages of manuals) and that never got me very far. I hope I can get some feedback to determine whether sinking a considerable amount of my employer’s time into this would prove useful. submitted by /u/shyouko

Turn-key home network docker image?

Hey all, I was hoping there was a turn-key docker image I could install on my Synology to use as a home network monitoring tool? Like LibreNMS, Observium or Zabbix? Ideally in one docker container. submitted by /u/xStimorolx

How to mount NFS Share in docker container

I have Sonarr running in Docker on Ubuntu. In order to see my media collection hosted on a NAS, I’ll need to map the share somehow; any clues how I can do that? There seem to be different methods, and I’m not sure what’s best. Complete noob to Docker at this point. submitted by /u/t0ms88
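One common approach is a Docker named volume backed by NFS, sketched here with a placeholder NAS address and export path:

```sh
# Create a volume that mounts the NAS export over NFS.
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.50,rw \
  --opt device=:/volume1/media \
  media

# Attach it to the Sonarr container like any other volume.
docker run -d --name sonarr -v media:/media linuxserver/sonarr
```

The host needs an NFS client installed (nfs-common on Ubuntu); the plain host-level alternative of mounting the share via /etc/fstab and bind-mounting it with -v works too.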

How DevSecOps Helps the U.S. Federal Government Achieve Continuous ATO

CloudBees sponsored this post.

Michael Wright
Michael is Director, Public Sector at CloudBees. He is a software sales professional with over 20 years of experience selling enterprise IT solutions and services.

Information security is at the heart of every software system launched in the U.S. federal government. In accordance with the Federal Information Security Management Act (FISMA), an information technology system is granted an Authority to Operate (ATO) after passing a risk-based cybersecurity assessment. While necessary, the ATO process can pose challenges to the software development process as it requires an authorizing official (AO) to pre-approve systems against a set of risk controls before putting the systems into operation.
An ATO is typically valid for three years, based on the assumption that the system’s cybersecurity posture won’t change significantly during that period. This assumption of relative stasis is often unrealistic because of modern development practices, which facilitate and embrace change. As changes are (inevitably) made, the “set it and forget it” ATO becomes inadequate. As a result, the need to reassess and reauthorize the system negatively impacts the overall cost and schedule of delivering it to the end-users.
The federal government’s guidelines for system assessment and authorization, laid out in the Risk Management Framework (RMF), suggest an alternative approach to the traditional once-every-three-years ATO process through continuous reauthorization. Created by the National Institute of Standards and Technology (NIST), the RMF offers a structured process to integrate information security and risk management activities into the system development lifecycle. The RMF continuous reauthorization concept seamlessly aligns with core tenets of DevOps and subsequently paves the way for DevSecOps.

Sponsor Note

CloudBees is powering the continuous economy by offering an end-to-end continuous software delivery management (SDM) system. CloudBees is the CI, CD and application release orchestration (ARO) powerhouse, built on the commercial success of its products and open source leadership.

DevSecOps incorporates security activities into the software development lifecycle (SDLC). The approach helps development teams seamlessly integrate security and compliance functions into their regular workflows by inserting the necessary steps into the continuous integration/continuous delivery (CI/CD) pipeline.
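As a concrete illustration (not from the article itself), here is a minimal sketch of what “inserting the necessary steps” can look like in a declarative Jenkins pipeline; the specific scanners are assumptions, not mandates:

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'mvn -B package' }
        }
        stage('Static Analysis') {
            // Fail the build on static-analysis findings.
            steps { sh 'mvn -B com.github.spotbugs:spotbugs-maven-plugin:check' }
        }
        stage('Dependency Scan') {
            // Check third-party dependencies against known CVEs.
            steps { sh 'mvn -B org.owasp:dependency-check-maven:check' }
        }
        stage('Deploy to Test') {
            steps { sh './deploy.sh test' }   // placeholder deployment step
        }
    }
}
```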
Let’s explore what this reality looks like for federal agencies.
Collaboration and Trustworthy Pipelines
Communication is an extremely effective catalyst for creating a dynamic development process and facilitating cross-functional collaboration between the development, security and operations teams. It is critical to involve security and operations teams in the initial, as well as routine, pipeline-related discussions to ensure that the right security steps are built into the pipeline(s).
Consistent conversations between the teams help to build and reinforce trust in the pipeline(s) used in the software factory. Laying out the security requirements early in the software development process helps developers weave the necessary security protections into their workflows and establish delivery pipelines that security teams trust.
Automated Pipelines Reduce Errors
DevSecOps enables organizations to automate processes, which in turn minimizes defects due to human error. Government agencies deliver higher performing and better quality applications by embedding automated security controls and tests into their pipelines. They also avoid bottlenecks and deliver capabilities faster, by automating the tasks and approval gates that really don’t need a human in the loop.
Secure and Faster Delivery
In many circles, security is still seen as an impediment to software delivery, a blocker that is both maddening and inevitable. However, the truth is that security can actually promote speed.
DevSecOps, when properly executed, accelerates software delivery because security checks are completed and defects are corrected continuously throughout the SDLC, as opposed to in one hulking lump after the system has been developed.
This is good news for developers (their creations make it into operations sooner), for security teams (they can more easily authorize the system because they know it has been developed in compliance with organizational policy) and for operations teams (they inherit a system that is worthy of running on their network).
Continuous Insight and Real-Time Compliance
Security teams gain transparency and useful insights at every step of the SDLC through an audit trail, governance measures and real-time compliance reporting without waiting for developers to generate and share reports post-development.
Those continuous insights, coupled with the confidence of trustworthy pipelines, eliminate wasteful back-and-forth between the security office and development teams and result in much quicker ATO sign-off.
Conclusion
The ATO decision is crucial for federal government agencies, as it signals that an IT system is safe to deploy “in the wild.” Implementing DevSecOps is a simple yet elegant way to push continuous security to the left and reduce the vulnerabilities being released into the production environment.
Federal agencies benefit greatly by reducing defects and gaining the trust that the relevant security checks have been executed and the code is always worthy of release.
Feature image from Pixabay.
The post How DevSecOps Helps the U.S. Federal Government Achieve Continuous ATO appeared first on The New Stack.

Programmatically scaling up / scaling down pods

We are building a Java-based service (A). There is a default number of Kubernetes pods that is always maintained for hosting service A. However, there is a batch process, a different Java service (B), which knows upfront the volume of requests that are going to hit service A. Can I build service B in such a way that it can programmatically scale service A up or down? Instead of relying on k8s to bring pods up/down, can I override it in code? I have limited knowledge of Kubernetes. I’ve heard a bit about Operators in the k8s world; are they intended for solving this kind of problem? submitted by /u/shamseer81
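Yes, this is possible: a Deployment’s replica count can be patched through the Kubernetes API, so service B only needs a client library and RBAC permission on the scale subresource. A hedged Java sketch using the fabric8 kubernetes-client (the namespace, deployment name, and sizing rule are placeholders); note that the Horizontal Pod Autoscaler covers the metric-driven version of this, and an Operator would be overkill for a simple scale call:

```java
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;

public class BatchScaler {
    public static void main(String[] args) {
        int expectedRequests = Integer.parseInt(args[0]); // known upfront by service B
        int replicas = Math.max(2, expectedRequests / 1000); // illustrative sizing rule

        // In-cluster configuration is picked up automatically when running in a pod;
        // the pod's ServiceAccount needs rights to patch deployments/scale.
        try (KubernetesClient client = new DefaultKubernetesClient()) {
            client.apps().deployments()
                  .inNamespace("default")      // placeholder namespace
                  .withName("service-a")       // placeholder deployment name
                  .scale(replicas);
        }
    }
}
```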

How to Protect Your Virtual Meetings from Zoombombing

Imagine, if you will, you’re participating in a Zoom meeting and, out of nowhere, a participant starts shouting epithets, displaying offensive content, and just generally disrupting your meeting. I don’t know about you, but to me, that sounds like something straight out of a college prankster’s handbook.
Although that might be true, it’s also happening now with Zoom meetings. This trend is called “Zoombombing” and it’s become quite the rage. In fact, new internet communities have started popping up where users go to share Zoom conference codes and request others to connect with the meetings and hurl insults, play pornographic material, and even make death threats against meeting attendees.
This issue has become so rampant that Zoom CEO Eric Yuan has put a freeze on feature updates in order to address the security issues. Zoom promised to address the problem within the next 90 days, with Yuan saying, “Over the next 90 days, we are committed to dedicating the resources needed to better identify, address, and fix issues proactively. We are also committed to being transparent throughout this process. We want to do what it takes to maintain your trust.”
Another writer for The New Stack, Jennifer Riggins, experienced this phenomenon first hand. She started doing remote dance parties five years ago and recently, because of the current situation, revived the event. She created a Zoom meeting and invited some women in tech and parenting communities she’s a part of. She used a Save the Date tool to create a fun, euphoric activity.
As soon as she opened the event, however, she was overwhelmed and confused. So many songs and noises flooded from her computer speakers at once. She couldn’t get control of the screen share (even though it was her event), which was displaying a constant barrage of Nazi paraphernalia, porn searches, and mockery of the disabled. To avoid the onslaught, she had one choice: end the meeting.
Zoom promised at the beginning of April to secure its web-based video conferencing. In the interim, however, countless Zoom meetings will have taken place, some of which might involve discussing sensitive company information. The dramatic increase in numbers is due primarily to the COVID-19 pandemic and the global “stay at home” orders being handed down by leaders on various levels. That means, until Zoom arrives at a solution, every meeting you host or attend runs the risk of being Zoombombed.
What do you do?
Obviously, you could use a different platform for your teleconferencing needs. For instance, you could always migrate to the open source Nextcloud Hub and use its built-in Talk feature. Another alternative is Discord.
Although an alternative might seem appealing, you will find yourself having to use Zoom at some point. It is, after all, one of the most widely-used teleconferencing platforms on the market. So when your hand is forced, what can you do to prevent Zoombombing?
In some instances, not much. If the information for your meeting escapes into the wild, there’s little you can do to prevent ne’er do wells from accessing your event.
However, if you manage your meeting carefully, you can mitigate Zoombombing as much as possible at the moment.
Let me show you what can be done.
Manage your Attendees
The first thing you need to do is keep control over your attendees. If your meeting is small enough, this is simple—chances are you’ll know everyone logged in. If the meeting is large, however, you should take at least one step to prevent bad actors.
When you set up a meeting, there is a configuration option that allows you to mute all participants upon entry. This means they can view the meeting, but not speak. This will at least prevent them from vocally disrupting. To set this option, start your meeting and then click the Manage Participants button. In the resulting window (Figure 1), click the More drop-down and then click the checkbox for Mute participants on entry.
Figure 1: Muting participants upon entry.
You can also click the Mute All button in the Participants management window. To make this actually effective, you’ll want to uncheck the box for Allow participants to unmute themselves (Figure 2).
Figure 2: If you don’t uncheck this box, anyone can unmute themselves and disrupt your meeting.
But what happens if an attendee starts sharing questionable or vulgar content via images? You can always remove them. To do that, locate the attendee in question in the Participant Management window, click their entry, click the More button, and then click Remove (Figure 3).
Figure 3: Removing an attendee from a meeting.
Another way to manage attendees is by way of Waiting Rooms. Because of the rise of Zoombombing, the company announced in a tweet that it is enabling Waiting Rooms by default. What are Waiting Rooms? Simple. When an attendee enters a meeting, they are sequestered into a room separate from the actual meeting. Those attendees wait in that room until an organizer allows them in. This is an easy way to prevent unwanted users from showing up and wreaking havoc.
Locking Down a Meeting
The best thing you can do to secure your meetings is to lock them down. Once a meeting is locked, no new attendees can join. If you opt to go this route, you’ll want to first make sure all in attendance should actually be there. If this is with users you do not know, you can always email them a code and have them share their unique code with you once in the meeting. After you’ve verified everyone in attendance should actually be there, open the Participant Management window, click More (bottom right corner) and then click Lock Meeting. You’ll be prompted to verify the locking of the meeting (Figure 4).
Figure 4: Locking a Zoom meeting.
Once you’ve locked a meeting, you can unlock it by clicking the More button again and clicking Unlock Meeting.
Set a Meeting Password
Zoom also allows you to set a meeting password. You can use this feature for both instant and scheduled meetings, but you must configure it from the web-based portal. Log into your Zoom account and click your profile icon (upper right corner) and click your email address. In the resultant window, click Settings and then scroll down until you see the entries for Require a password when scheduling a new meeting and Require a password for instant meetings. Click the On/Off sliders until they are in the On position (Figure 5).
Figure 5: Enabling passwords for meetings.
To be even safer, uncheck the option for Embed password in meeting link for one-click join. This means users will have to manually type the meeting password, but it’s better to be safe than convenient.
When you then create an event, and you go to invite people to the meetings, you’ll see the meeting password in the bottom right corner of the invite window.
The one caveat to passwords (and this is a big caveat) is that when you set up scheduled meetings, that password is sent out, in plain text, in the meeting invites. So your best bet, until Zoom gets this issue ironed out, is to only create instant meetings and then share the information out in a more secure manner (such as sending meeting IDs and passwords in separate or encrypted emails).
Nothing is 100% and Zoom meetings are far from it. But with a bit of care and caution, you can avoid getting Zoombombed. These suggestions aren’t foolproof, by any stretch of the imagination, but they are exponentially better than doing nothing.
Feature image by OpenClipart-Vectors from Pixabay.
The post How to Protect Your Virtual Meetings from Zoombombing appeared first on The New Stack.

Authorised endpoints not working for clusters deployed with Rancher UI 2.4.0

Hi! I created two clusters using the Rancher 2.4.0 UI. With both clusters, I cannot connect directly to the masters using the specific contexts in the kubeconfig I download from the UI. I can connect with the Rancher-proxy context just fine, but with the direct contexts I always get this: error: You must be logged in to the server (Unauthorized). What am I missing? Has something changed in 2.4.0? I don’t have any problems with a cluster created with 2.3.5, and the fact that I am having the same issue with two new clusters can’t be a coincidence. Could any Rancher user try connecting directly to a master of a cluster deployed with 2.4.0? If you don’t have one, maybe create a throwaway cluster to give this a try? Thanks in advance… hopefully I am not asking too much. Any help would be much appreciated. submitted by /u/Sky_Linx

How High-Performance Teams Cultivate a Culture of Early and Meaningful Feedback

Raygun sponsored this post.

Freyja Spaven
Freyja writes for Raygun.com, the performance monitoring suite that enables you to build stronger, faster and more resilient web and mobile applications for your customers.

Monitoring and optimizing performance, in addition to staying on top of customer issues, is essential if an organization is to survive in this fast-paced world. That’s often harder than it sounds. Picking the right metrics to track is a challenge, as is creating the right team culture.
Raygun recently brought four tech leaders together to share their experiences. They talked about how software teams can build better products by focusing on the metrics that matter the most, and cultivating a culture that values early feedback.
Our panelists were:
John-Daniel Trask, CEO and co-founder of Raygun
Diana Kumar, senior director of product development at Tableau Software
Rory Richardson, head of business development, serverless and application integration at Amazon Web Services
Doug Rathbone, software development manager at Amazon Alexa
Here are the main insights from the panel:
1. Engineers Should Get Closer to Customers
All of the panelists believed that developers should get first-hand experience with customers.

Sponsor Note

Detect and diagnose errors, crashes and performance issues with greater speed and accuracy. Raygun provides full stack application monitoring for software teams. Now you can enjoy complete visibility into software health and poor end-user experiences, all in one place.

Rathbone from Amazon Alexa greatly values diverse feedback and being able to quickly act on it, citing the feedback mechanism in the AWS console as an example.
Richardson from AWS uses an interesting technique in order to get software engineers out of their comfort zones, to encourage them to come up with innovative solutions. She works backward with the customer to create a document called PR FAQ, which is an imaginary press release about potential future features — including hypothetical questions asked about those features.
“One of the inspirations for how you get to the PR FAQ is talking directly to customers,” she said.
Diana from Tableau Software added that “dogfooding” — that is, making developers use the tools they create — is very effective in connecting them to the product.
Trask from application performance monitoring software, Raygun, reiterated the importance of having great feedback mechanisms in place.
2. Focusing on the Right Quality Metrics Is Essential
When it comes to the challenge of measuring quality incrementally when there are big leaps in product development, Richardson said this is a judgment call teams have to make. It will depend on the business domain and the market opportunity.
Rathbone thinks the way to incrementally measure quality varies a lot, depending on team and company culture. When it comes to team maturity, he sees it as a hierarchy of needs that need to be measured both on the client and the product side.
“Making sure your systems are up is the first step on the operational side,” he continued. “And then, product wise, you measure incremental improvements. With AWS and partners like Raygun, you get a lot of out-of-the-box that enables you to move up the stack. Coming back to the hierarchy of needs, what you want here is self-actualization — where your people aren’t getting paged in their sleep, you’re growing, and you have product-market fit.”
Rathbone added that his team at Alexa tracks a metric called “perceived latency,” which is different than just plain latency in that it measures how the users perceive latency. Measuring this helps developers bridge engineering and product, solving latency problems while also providing a better user experience.
Raygun’s Trask had some advice for smaller businesses. They should wait until they have statistical significance in their customer feedback before acting. Not all feedback is equally meaningful or valuable, he cautioned, and organizations should prioritize looking for meaningful feedback. That’s why Raygun removed NPS (Net Promoter Score) in favor of more intimate feedback methods.
3. More Smoke Signals, Less Fire Drills
One common theme the panelists discussed was how to stop simply responding to fires. Instead, teams should focus on preventing fires from starting in the first place.
“If you’re always responding to fire drills and things that are going wrong with customers, then your timeline is wrong,” said Richardson. “You need to respond to smoke signals, not alarm fires. You get that by changing your timeline to collect data earlier in the sales cycle, so that problems don’t keep happening.”
Rathbone agreed that being customer-obsessed is essential. However, he added that teams need to have the right levels of support and autonomy from their managers. Only then will they be able to respond to issues more proactively.
4. Nurturing the Right Team Culture Is Vital
All of the panelists agreed that team culture is essential. Team members need to have autonomy in order to build better products. Also, they agreed that organizations must foster a culture where value is the ultimate focus.
When failure does happen, teams should embrace it and learn from it — instead of looking for someone to blame. Richardson shared that the “embrace failure” mindset is one of the hardest challenges she faces with new team members. Since most organizations punish failure, the automatic tendency is to hide it.
“We don’t promote for failure,” she said, “but we iterate on it. So failure is no longer a negative thing culturally. We encourage people to expose their failures earlier, which creates an environment where everyone feels safer.”
The panelists also rely on software tools to shorten feedback cycles. Rathbone said that his team at Alexa employs an internal application called Connections, which they use to provide weekly feedback on their team’s health. Similarly, employees at Raygun use a tool called 15five to raise issues with their managers.
Final Considerations
Designing high-performance teams that deliver great products isn’t easy. However, we can stand on the shoulders of giants by learning from successful companies. From this panel, we learned that constant and meaningful feedback is key to maximize employee happiness, customer satisfaction, and product quality.
Watch the full panel discussion:

Amazon Web Services is a sponsor of The New Stack.

The post How High-Performance Teams Cultivate a Culture of Early and Meaningful Feedback appeared first on The New Stack.

Call WebApi in another container from Angular Container

Dockerfile:

```
# Stage 1
FROM node:latest as node
WORKDIR /usr/src/app
COPY package.json /usr/src/app
COPY proxy.conf.json /usr/src/app
RUN npm install
COPY . /usr/src/app
CMD ng serve --host 0.0.0.0
RUN npm run build
ENTRYPOINT npm proxy

# Stage 2
FROM nginx:1.13.12-alpine
COPY --from=node /usr/src/app/dist/miniRws /usr/share/nginx/html
COPY ./nginx.conf /etc/nginx/conf.d/default.conf
CMD ["npm", "proxy"]
```

nginx.conf:

```
server {
    listen 80;
    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html =404;
    }
}
```

Part of package.json:

```
{
  "name": "mini-rws",
  "version": "0.0.0",
  "scripts": {
    "ng": "ng",
    "start": "ng serve --host 0.0.0.0",
    "proxy": "ng serve --proxy-config proxy.conf.json --host 0.0.0.0",
    "build": "ng build",
```

My understanding is that I need to pass the server API via proxy_pass in the nginx.conf somehow. Can somebody please help edit the nginx.conf and Dockerfile to make it work? submitted by /u/Homersbm
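A hedged sketch of the proxy_pass wiring: assuming the Web API container is reachable on the same Docker network under the service name webapi on port 80 (both placeholders), the nginx.conf could route /api to it like this:

```nginx
server {
    listen 80;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html =404;
    }

    location /api/ {
        proxy_pass http://webapi:80/;        # service/container name on the shared network
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Note the CMD ["npm", "proxy"] in stage 2 will fail, since the nginx image does not include npm; with nginx serving the built app and proxying the API, that line (and ng serve / proxy.conf.json, which are only for local development) can simply be removed.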

Dockerfile ‘touch’ command not creating file in container. Also can’t GIT CLONE.

Hello all, I am trying to create a Dockerfile to use for WordPress development; ideally, I’d want this docker container to contain a starter theme, a few plugins, etc. This is a sample of what I’m trying to do:

```
FROM wordpress:latest
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
RUN cd /var/www/html/wp-content/themes/ && git clone https://github.com/Automattic/_s.git
```

However, when I build the image I get the following error:

/bin/sh: 1: cd: can't cd to /var/www/html/wp-content/themes/

In an effort to debug, I commented out the above line and then tried to mkdir and touch several test directories and files. These commands all run successfully:

```
RUN touch /var/www/html/testfile.txt
RUN sh -c 'touch /var/www/html/testfile.txt'
```

However, when I exec into the running container, I don’t see testfile.txt (and can’t find it either). Am I doing something wrong?

Lastly, am I thinking about this WordPress workflow wrong? I use the same starter theme on most sites I build; is it unreasonable to expect to make a docker image I can spin up that contains WordPress with the plugins and themes that I always use already copied into the themes/plugins directories? I am still getting my head around a good CI/CD WordPress workflow. submitted by /u/js_novice
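A plausible explanation, based on how the official image is documented to behave: wordpress:latest keeps its source in /usr/src/wordpress and its entrypoint copies that tree into /var/www/html (a volume) at container start, so anything baked into /var/www/html at build time is hidden at runtime, and wp-content/themes does not exist yet during the build. A sketch of the workaround:

```dockerfile
FROM wordpress:latest
RUN apt-get update && apt-get install -y git

# Clone into the image's source tree instead of the runtime volume; the
# entrypoint copies /usr/src/wordpress into /var/www/html on first start.
RUN git clone https://github.com/Automattic/_s.git \
    /usr/src/wordpress/wp-content/themes/_s
```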

8,000 More Reasons to Run Open Source as a Managed Service

Amazon Web Services (AWS) sponsored this post.

Matt Asay
Matt has been involved in open source and all that it enables (cloud, machine learning, data infrastructure, mobile, etc.) for nearly two decades, working for a variety of open source companies and writing regularly for InfoWorld and TechRepublic. You can follow him on Twitter (@mjasay).

Trend Micro recently reported that “8,000 Redis instances […] are running unsecured in different parts of the world, even ones deployed in public clouds.” But that’s not the real story. Trend Micro is careful to point to official Redis documentation which stresses that “Redis is designed to be accessed by trusted clients inside trusted environments.” So it’s not a good idea to leave such servers directly connected to the Internet, nor to “an environment where untrusted clients can directly access the Redis TCP port or UNIX socket.”
Yet clearly these best practices sometimes go unheeded. Across the broad array of open source software, stories routinely surface about security breaches involving unsecured software — usually the result of misconfigured permissions. This isn’t because the software itself is inherently insecure, nor is it because associated vendors aren’t smart about security. It’s because we humans are not always very good about applying correct configurations or investing the effort to secure our software, open source or otherwise.
But that’s OK, because there’s an incredibly easy solution to TCP port-compromised Redis servers and countless other security issues in open source software: run a fully managed service.

Sponsor Note

Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform, offering over 175 fully-featured services from data centers globally. Millions of customers are using AWS to lower costs, become more agile, and innovate faster.

Push the Easy Button
But first, let’s be clear: Redis has never been more secure. Commenting on the Trend Micro report, Redis Labs’ Itamar Haber pointed out that the “[8,000] number is substantial, but it demonstrates a constant, if not declining, number of exposed servers overall.” Each Redis release has resulted in a dwindling number of unsecured Redis servers, with the 4.0 release introducing protected mode — which has dramatically improved the default Redis security.
This is fantastic but it’s arguably not even the most significant set of security improvements for Redis. As Haber goes on to suggest, “managed service provided by Redis Labs is secure out of the box and eliminates the need for users to figure out their own security practices.”
Yes, that’s right. The easiest answer to Redis security issues, such as TLS not being on by default or no password being set, is just two words long: managed service. The Redis Labs service, for example, fixes these problems out of the box.
If you’re Redis-inclined (and you probably are — the official Redis Docker Hub image has registered more than 1 billion downloads to date), you’re spoiled for choice. Redis Labs offers a fully managed Redis service. So do Aiven, DigitalOcean, and others — including Amazon Web Services (AWS). Does this mean I’m biased? Of course. After all, I work for AWS and we offer managed services for Redis (Amazon Elasticache), Apache Kafka (Amazon MSK), and other open source projects. Would I love for you to use them? Yep.
But I’m actually more concerned that you use something — anything — that keeps your data secure.
As mentioned, this isn’t just a Redis thing. Perhaps you’ve read about MongoDB security issues, like those reported in Naked Security. MongoDB has offered the ability to close off remote access since version 2.6, and has turned this on by default since version 3.6. Does this mean MongoDB will not get hacked? No, because the company (correctly, in my view) tries to balance user freedom with security. As a MongoDB spokesperson told Naked Security, “we believe setting localhost by default puts users in a mode where they have to make a conscious decision about their own appropriate path to network safety.”
Or, you know, developers could run Atlas, MongoDB’s managed service. Problem: solved.
Open Source Doesn’t (Have to) Mean ‘Open Door’
Freedom is the bedrock foundation of open source software. From the earliest days of (free and) open source software, the most basic rights have been: “First, the freedom to copy a program and redistribute it to your neighbors, so that they can use it as well as you. Second, the freedom to change a program, so that you can control it instead of it controlling you…”. Later this was articulated by Richard Stallman as “the Four Freedoms,” and a separate group of developers articulated the Open Source Definition (OSD), which outlined its own essential set of freedoms.
While neither the Four Freedoms nor the OSD established a “right to misconfigure open source to make it insecure,” that “freedom” has been inadvertently embraced by far too many developers.
Within open source — and, yes, also within AWS — a core tenet is to allow developers the flexibility to change our default configurations to suit whatever style of application they’re constructing. As is the case with running software on-premises or anywhere else, when a new access control configuration is set, developers should ensure that it protects access the way that they intended. Clearly, however, many developers don’t do this, unwittingly opening the door to security breaches.
So let’s make this simple: run open source as a managed service and stop worrying about patching, configuration errors, etc. Whatever the open source software — be it Apache Kafka, Redis, MySQL, or many, many others — odds are good that you can get it as a managed service. Odds are also good that, when you do, you won’t have to worry about headlines like “More Than 8,000 Unsecured Redis Instances Found in the Cloud,” because yours won’t be among them.
MongoDB and Redis are sponsors of The New Stack.
Feature image via Pixabay.
The post 8,000 More Reasons to Run Open Source as a Managed Service appeared first on The New Stack.

Ensure pods with the same label are scheduled on different nodes

Hi, is there a way to force pods with the same label to be scheduled on different nodes? I am trying with the following, but sometimes pods are still scheduled on the same node, even though there is a node with no pods with that label yet. Thanks in advance.

```
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: "app.kubernetes.io/name"
          operator: In
          values:
          - hcloud-fip-controller
      topologyKey: "kubernetes.io/hostname"
```

Edit: forgot to mention that these are pods in different deployments of the same thing (to assign each floating IP in Hetzner Cloud to a different node). Edit 2: Specifying all the namespaces in the anti-affinity seems to have fixed it. submitted by /u/Sky_Linx
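For readers hitting the same thing, a hedged sketch of the Edit 2 fix (namespace names are placeholders): a podAffinityTerm only matches pods in the listed namespaces, defaulting to the pod’s own namespace, so deployments spread across namespaces must list them explicitly.

```yaml
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: "app.kubernetes.io/name"
          operator: In
          values:
          - hcloud-fip-controller
      namespaces:                 # placeholders: every namespace the pods live in
      - fip-controller-a
      - fip-controller-b
      topologyKey: "kubernetes.io/hostname"
```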

Unable to connect with MySQL DB running on a docker container through another container

Hello everyone, I was trying to connect to a MySQL DB running in one docker container from another container, however I am getting this error:

```
testapp_1 | Traceback (most recent call last):
testapp_1 |   File "/usr/local/lib/python3.6/site-packages/mysql/connector/connection_cext.py", line 216, in _open_connection
testapp_1 |     self._cmysql.connect(**cnx_kwargs)
testapp_1 | _mysql_connector.MySQLInterfaceError: Can't connect to MySQL server on 'mysql' (111)
testapp_1 |
testapp_1 | During handling of the above exception, another exception occurred:
testapp_1 |
testapp_1 | Traceback (most recent call last):
testapp_1 |   File "App/pycheck.py", line 6, in <module>
testapp_1 |     db="persist"
testapp_1 |   File "/usr/local/lib/python3.6/site-packages/mysql/connector/__init__.py", line 218, in connect
testapp_1 |     return CMySQLConnection(*args, **kwargs)
testapp_1 |   File "/usr/local/lib/python3.6/site-packages/mysql/connector/connection_cext.py", line 80, in __init__
testapp_1 |     self.connect(**kwargs)
testapp_1 |   File "/usr/local/lib/python3.6/site-packages/mysql/connector/abstracts.py", line 960, in connect
testapp_1 |     self._open_connection()
testapp_1 |   File "/usr/local/lib/python3.6/site-packages/mysql/connector/connection_cext.py", line 219, in _open_connection
testapp_1 |     sqlstate=exc.sqlstate)
testapp_1 | mysql.connector.errors.DatabaseError: 2003 (HY000): Can't connect to MySQL server on 'mysql' (111)
```

I have tried almost everything. It works fine when the MySQL DB is in a container and I access it from my local machine; however, it doesn’t work when I access it from another container. Here is the compose file I am using:

```
version: '3'
services:
  mysql:
    image: mysql:latest
    container_name: mysql
    environment:
      MYSQL_USER: user
      MYSQL_ROOT_PASSWORD: helloworld
      MYSQL_DATABASE: persist
    ports:
      - "3306:3306"
  admin:
    image: adminer
    ports:
      - "8080:8080"
  testapp:
    build:
      context: .
      dockerfile: Dockerfile
    command: python App/pycheck.py
    ports:
      - "8001:8001"
```

Here is the Python application I am using to access the database:

```
import mysql.connector

cnx = mysql.connector.connect(
    host="mysql",
    user="user",
    passwd="helloworld",
    db="persist"
)
cursor = cnx.cursor()

cursor.execute("CREATE TABLE best_table (Name VARCHAR(255))")

add_employee = ("INSERT INTO best_table "
                "(Name) "
                "VALUES (%s)")
data_employee = ('Farrukh')

# Insert new employee
cursor.execute(add_employee, (data_employee,))
data_employee = ('Rehan')
cursor.execute(add_employee, (data_employee,))

# Make sure data is committed to the database
cnx.commit()
cursor.close()
cnx.close()
```

Here is the Dockerfile mentioned in the compose file for the Python application:

```
FROM python:3.6.7
ADD . /App
WORKDIR /App
RUN pip install -r App/requirements.txt
```

Any help is greatly appreciated! submitted by /u/srl9
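Error 111 (connection refused) on the service name often just means the app connected before mysqld finished initializing, since compose does not wait for readiness. A hedged sketch of one mitigation; note the long-form depends_on condition requires a compose file version that supports it (e.g. 2.1), and retrying the connection in the Python code is the more portable fix:

```yaml
version: '2.1'
services:
  mysql:
    image: mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: helloworld
      MYSQL_DATABASE: persist
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-uroot", "-phelloworld"]
      interval: 5s
      retries: 10
  testapp:
    build: .
    command: python App/pycheck.py
    depends_on:
      mysql:
        condition: service_healthy   # start only after the ping succeeds
```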

Announcing the Compose Specification

Docker is pleased to announce that we have created a new open community to develop the Compose Specification. This new community will be run with open governance with input from all interested parties allowing us together to create a new standard for defining multi-container apps that can be run from the desktop to the cloud. 

Docker is working with Amazon Web Services (AWS), Microsoft and others in the open source community to extend the Compose Specification to more flexibly support cloud-native platforms like Kubernetes and Amazon Elastic Container Service (Amazon ECS) in addition to the existing Compose platforms. Opening the specification will allow innovation to flourish and deliver more choices to developers, accelerating how development teams build and ship applications.

Currently used by millions of developers and with over 650,000 Compose files on GitHub, Compose has been widely embraced by developers because it is a simple cloud and platform-agnostic way of defining multi-container based applications. Compose dramatically simplifies the code to cloud process and toolchain for developers by allowing them to define a complex stack in a single file and run it with a single command. This eliminates the need to build and start every container manually, saving development teams valuable time.

Previously, Compose did not have a published specification; it was tied to the implementation and to the specifics of the platforms it shipped on. Open governance will benefit the wider community of new and existing users with transparency and the ability to have input into the future direction of the specification and Compose-based tools. With greater community support and engagement, Docker intends to submit the Compose Specification to an open source foundation to further enhance the level playing field and openness.

If you want to get started using Docker Compose today to try the existing features you can download Docker Desktop with Docker Compose here. Or if you are looking for some examples or ideas to get started with Compose why not check out the Awesome Compose Repo. The draft specification is available at compose-spec.io; we are looking for contributors to the Compose Specification along with people who are interested in building tools around the specification. Docker will continue to contribute to the specification and be an active member of the community around it going forward.
The post Announcing the Compose Specification appeared first on Docker Blog.

Chip Childers Takes Executive Director Role at Cloud Foundry

The Cloud Foundry Foundation has announced that its long-time Chief Technology Officer, Chip Childers, is assuming the role of executive director as of April 2. He is replacing Abby Kearns, who has accepted an executive role elsewhere, according to the organization.
Those familiar with the Cloud Foundry Foundation, an independent non-profit open source organization, will no doubt be disappointed about Kearns’ departure, though will certainly agree that Childers is a fine successor. In the CTO role since the company’s founding in 2015, he has been an articulate and empathetic ambassador for the foundation’s many technologies, including the flagship Cloud Foundry open source cloud application platform.
Cloud Foundry has been built and refined through the help of 3,600 contributors. More than 39,000 commits have been made to the platform in the last 12 months alone. Gartner research has estimated that Cloud Foundry’s total market value is approximately $3.1 billion, and will rise to $5.25 billion in a few years as more organizations move to open source-based cloud computing.
We caught up with Childers to learn about his new role, his immediate priorities, and his overall strategy to shepherd Cloud Foundry into the future.
What is your first priority as the new executive director?
My personal priority is to make sure the foundation staff, as well as the leaders in the community, are aligned around our shared priorities: supporting our community and bringing the power of Cloud Foundry to Kubernetes clusters everywhere. Let’s talk about those two a little further…
First, we will be very focused in the coming months on supporting the continued health of the contributing community. Cloud Foundry is one of the largest open source projects out there, and the project enjoys continued investment from all of the major participants who have been working on it for years now. We can keep improving, though, and I will work specifically on increasing the inclusivity of our community. We want to look at how we can ensure even lower friction for casual contributors to the project. We want to find ways to make room for new contributors, while ensuring that the code keeps progressing.
Second, we are collectively aligned around the mission of bringing our world-class developer experience to Kubernetes users. This mission is now clearly the North Star of our contributors. It’s guiding all of our community members and their work. Obviously open source foundations like the Cloud Foundry Foundation don’t dictate roadmaps (we’re not a product company), but we do what we can to help our ecosystem align around shared missions like this. Our focus at the foundation is on helping community participants see the opportunities for collaboration that might not otherwise be obvious.
Kubernetes is the modern infrastructure abstraction, and it needs a cloud native developer experience to be most useful for organizations. I have a long history in infrastructure as a service. I have moved up the abstractions, as it were, getting closer back to my roots as an enterprise web developer from many years ago. Conversations around Infrastructure-as-a-Service revolved around this idea that we were, as an industry, hoping to achieve some type of utility compute model. The idea was that different infrastructure platforms would be roughly interchangeable. In reality, the VM-centric platforms simply couldn’t be a utility service in that way. Kubernetes and standardized container images are different, and they offer a better chance at utility. There’s certainly some variance in implementation detail, but also service quality — whether you’re talking about all the public cloud providers and their managed services, or rolling your own out to some infrastructure, or work that companies like SUSE and VMware are doing.
That’s the infrastructure story, but Cloud Foundry is all about developer experience. We have always had the experience of the developer as the top priority. This is the focus, and the years of success, that we hope to enable for Kubernetes users everywhere.
How is the executive director role different from the CTO role? 
For us, it’s all about moving to a model that reflects the maturity of the project. As I said earlier, our number one priority is to make sure that we’re very, very focused on supporting the technical contributing community. However, that shift isn’t something that’s new just because I’m stepping in at this point. It’s one that we were making very purposefully in our planning for this year.
Now, the pandemic has certainly thrown some things up in the air for us.
The summits that we had scheduled for this year were already planned to be very different than last year’s summits. For several years, our events have been a blend of trade show and community collaboration time. It had been that since 2015, when the foundation was started. This year, our event focus is on driving high impact collaboration opportunities for our contributing community. That’s number one. Number two, we want to enable collaboration for those in the outer rings of the community, those that may not be building the platform, but are heavily invested in it as individuals or as companies.
And so the way we designed the summits for this year was as a one-day event attached to the Linux Foundation’s Open Source Summit, both North America and Europe. We made that shift very purposefully to align with our 2020 goals.
Again, my priorities are in line with what we had already been planning and executing against. We’re just going to sharpen that a little bit further and make sure that we’re very clear that everybody on my team is going to find ways to help elevate our community, connect our community better.
But it’s not just sustaining the community, because there’s a lot of opportunity for continued diversification and continued community growth. I’ll give you an example of that. We’re piloting a mentor/mentee program right now. It’s for people that may have a casual interest in being connected with a member of the Cloud Foundry community. It’s a program that we think is going to be pretty valuable because it’s going to let us reach individuals all around the world.
We are also working to find some ways to help support the individuals that are part of our community in these very difficult times globally. We’ve got a kind of a virtual water cooler scheduled for twice a day to give people in Europe and the U.S. an opportunity to converse informally. I firmly believe that any opportunity for serendipitous conversations can be hugely beneficial to people’s well being.
Cloud Foundry also announced today that VMware’s Paul Fazzone will replace (Dell CTO) John Roese as the Cloud Foundry Foundation Board Chairman. Could you say a few words about Paul?
Paul has been on our board of directors for a number of years, serving as our treasurer for a number of them. His stepping into the chairman role is a very good thing for a couple of reasons. First, Pivotal was acquired by VMware and it’s clear that VMware has been doing an amazing job of bringing together all of their recent acquisitions. If you look at their acquisition of Bitnami, Heptio and Pivotal, and consider how they are putting them all together, it’s an impressive plan. Paul is responsible for R&D for the Tanzu portfolio, and his continued commitment to the Cloud Foundry project demonstrates the importance of our community’s project within Tanzu. I’m also inspired by a lot of the focus that Paul has had on making sure that VMware is a very, very good open source participant.
The Cloud Foundry Foundation, the Linux Foundation and VMware are sponsors of The New Stack.
Feature Image by Gerd Altmann from Pixabay.
The post Chip Childers Takes Executive Director Role at Cloud Foundry appeared first on The New Stack.

Operating in the New Normal

If you’d have told me at the end of 2019 that within three months the whole of Jetstack would be working remotely, facing one of the worst crises the world has seen for decades, I would have had a hard time believing it. But, sadly, this is the case, and as a team we are having to respond to some of the most challenging times of our lives.

Hiding a swarm container behind a vpn

I have a swarm of 3 nodes, all firewalled by my infrastructure provider (DigitalOcean), and only one entry node is publicly accessible. The entry node is running a reverse proxy container (Traefik) and also an OpenVPN container (udp 1194), whilst another node is running a hello-world ‘whoami’ container on port 80. I have a wildcard certificate for my domain from Let’s Encrypt. These containers are on the same overlay network created with `--attachable`. How can I ensure that the whoami container only resolves, and I can only communicate with it, when I am on the VPN? My progress so far: I’ve gathered that I need to make this whoami container not discoverable or routed to by the reverse proxy, and instead create a private network (?) running this container which is on the VPN. Is this what I need to do, and how can I do this? I have run `ip -4 address add <some 192.168 ip here> dev eth0` on the entry node, and I can only ping this from my laptop when I’m on the VPN, so I feel this is a start. How can I make this work with Docker? OpenVPN on swarm requires using another container to launch and run this in privileged mode, so I have this working. I can connect to the VPN successfully. submitted by /u/J7mbo
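A hedged sketch of the private-network idea (names are placeholders): put the whoami service on an internal overlay network that Traefik is not attached to, and attach the OpenVPN container to both that network and the public one, so the service only resolves and routes for connected VPN clients.

```sh
# Internal overlay network: no outside routing, joinable by standalone containers.
docker network create --driver overlay --attachable --internal vpn-only

# The demo service lives only on the private network (no Traefik labels).
docker service create --name whoami --network vpn-only containous/whoami

# The OpenVPN container keeps its public network for udp/1194 and also joins
# vpn-only; it then needs to push a route for the vpn-only subnet to clients.
docker network connect vpn-only openvpn
```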

Using word modifiers with Bash history in Linux


Photo by Pressmaster from Pexels

So you’ve mastered parsing Bash history? Now you’re ready to explore Bash modifiers available to the history command.
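
For a taste, here are a few illustrative examples (the paths are made up; word designators pick pieces of a previous command, and modifiers such as :h, :t and :s transform them):

```
$ cp /tmp/reports/summary.txt /srv/backup/
$ vim !$                 # !$ = last word of the previous command -> /srv/backup/
$ echo !cp:1             # word 1 (first argument) of the last 'cp' command
$ echo !cp:1:h           # :h drops the last path component -> /tmp/reports
$ echo !cp:1:t           # :t keeps only the tail -> summary.txt
$ !cp:s/tmp/home/:p      # :s/old/new/ substitutes; :p prints without running
```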

Read the full article on redhat.com

Posted April 7, 2020 | by Seth Kenlon (Red Hat)

Topics: Bash, Linux

Git at 15: How Git Changed the Way We Code

Fifteen years ago a number of the Linux kernel developers tossed their hands in the air and gave up on their version control system, BitKeeper. Why? The man who held the copyright for BitKeeper, Larry McVoy, withdrew free use of his product on claims that one of the kernel devs had reverse engineered one of the BitKeeper protocols.
Linux creator Linus Torvalds sought out a replacement to house the Linux kernel code. After careful consideration, Torvalds realized none of the available options were efficient enough to meet his needs:
It would do the opposite of what the Concurrent Versions System (CVS) does.
It would support a distributed workflow (similar to that of BitKeeper).
It must offer safeguards against corruption.
The project must scale to meet the intense demand of Linux kernel development.
Patching should take no more than three seconds.
Given Torvalds’ prowess as a developer, he quickly realized his only choice was to create the tool himself. And so, on April 7, 2005, Mr. Torvalds launched his new project, git. According to Torvalds, the project was named after himself.
Issue the command man git and you’ll see the official name of git includes the slightest bit of humor (Figure 1).
Figure 1: The stupid content tracker.
Since its original release, git has become an incredibly efficient and easy-to-use tool. Git is also one of the most widely used source code management systems on the planet. According to the 2018 Stack Overflow Annual Developer Survey (the last year it included a version control question), 87.2% of developers used git.
Just What Is git?
For those select few developers who have never experienced git (or those who aren’t developers, but are curious), git is an open source version control system (VCS). A VCS is a tool/system/service used to manage changes to documents, computer programs, websites, and just about any collection of information. With regard to software development, a VCS helps a team of developers manage changes to source code over time.
“Don’t worry, we’ve all been there. We’ve all made mistakes. Git makes it easy to undo every mistake you can make and then some.”– Sean Callen
But git approaches the task a bit differently than most version control systems. Most systems store information as a list of file-based changes, so both the file and the changes made to the file over time are stored. Git, on the other hand, treats data as a series of snapshots of a miniature filesystem. Any time a developer commits or saves a project, git takes a snapshot of those files and stores a reference to the snapshot. Git takes this one step further: it runs a diff on the files and doesn’t store any file that hasn’t changed, but links back to the previously stored (identical) file.
This method of storing snapshots makes git incredibly efficient.
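
You can see this model directly with git’s plumbing commands; a quick sketch (run inside any repository; the hashes will differ):

```
$ git log -1 --format=%H        # hash of the latest commit
$ git cat-file -p <commit-hash> # a commit records one 'tree': the snapshot
$ git cat-file -p <tree-hash>   # the tree lists blobs by hash; files that
                                #   didn't change reuse their old blob hash
```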
Git also works both locally and remotely. Developers install git on their computer and then can pull projects, work with them locally, and push their changes back. The basic git workflow goes like this (see the command sketch after the list):
Create a repository (a project) with a git hosting tool (such as GitLab).
Clone the repository to your local machine.
Add a file to your local repository.
Commit the changes.
Push the changes back to the remotely hosted project.
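
As shell commands, that workflow might look like this (a minimal sketch; the repository URL and file are hypothetical):

```
git clone https://gitlab.com/example/project.git
cd project
echo "meeting notes" > notes.txt
git add notes.txt
git commit -m "Add meeting notes"
git push origin master
```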
Changing How We Code
But how has Git changed the landscape of development? Ask just about any developer around the globe and the conclusion you will draw is “profoundly.”
In fact, Sean Callen, Director of Web Engineering at System76, said, “It’s hard to quantify just how much better git is coming from SVN and IBM Rational ClearCase.” Callen qualifies that, however, by adding, “Previous tooling felt clunky and collaboration was never as fluid as it was with git. Toss GitHub into the mix and you’ve got an experience that’s unrivaled by the others.” More importantly, however, Callen believes that git has certainly made his work easier.
But what kinds of changes has git made since those early days? “I have somewhat of an obsession with a clean git history,” Callen said, elaborating that “improvements to git-rebase and git-cherry-pick have made it possible to work collaboratively with others while maintaining a usable history.” The git-rebase and git-cherry-pick tools have come to Sean’s rescue not only in his own work, but when helping other team members.
“When Git appeared, as a novelty, it was interesting to see the ‘different way’ it presented things,” said Mario Danic, lead developer/senior software engineer for Nextcloud. “And while the ideas behind Git remain unchanged, the overall experience (for me) has changed significantly due to improvements to the facade which make it not only simpler to use even in complex situations but also lowers the barrier to entry and education needed for the more junior developers.”
And speaking of tools, Callen proclaims the command line interface (CLI) is the way to go. “I’ve tried some GUIs but I found time and time again that I came back to CLI to get things done. One of git’s strong suits may very well be its approachable CLI.” Around that CLI there has grown, over the years, a “plethora of tools that enhance the CLI experience further, like GitHub’s own recently released CLI tool,” Callen claimed.
What Does the Future Hold?
It’s anyone’s guess what the future holds for git. Now that GitHub has released its own CLI tool, the sky’s the limit for what programmers and teams of programmers can do with the software. But what would programmers want from future git releases? If Sean had his way, he’d be happy if git changed nothing. “I’m happy with git and how well it has performed for me over the years. From enterprises to small start-ups, popular open source or personal project, I can’t recall a time I felt git was lacking.”
“One of the bigger shortcomings that git had in its beginnings was lack of documentation. I think that’s being rectified more and more,” Callen said. As to its biggest shortcomings, Callen warns, “While it’s most definitely widely spread out and adopted by the developers, junior people just coming in still have the notion that git is somehow ‘hard’ because that’s how it was in the beginning. I guess this will quickly fade out as people realize that is not the case and as they learn more about it, but it will take time.”
Callen advised new users, “Don’t worry, we’ve all been there. We’ve all made mistakes. Git makes it easy to undo every mistake you can make and then some.”
The post Git at 15: How Git Changed the Way We Code appeared first on The New Stack.

Remote support options for sysadmins


Photo by ThisIsEngineering from Pexels

Remote support often feels like you’re trying to wash dishes from across the room. Find out how to get closer to the sink and your users.

Read the full article on redhat.com

Posted April 7, 2020 | by Ken Hess (Red Hat)

Topics: Remote work, Sysadmin culture

Running commands in a Kubernetes manifest yaml file as non-root

Hello, I’m searching for a way to run commands in a manifest yaml file as a non-root user; I could not find anything on the internet, so it seems it’s not possible. I’m using: command: [ "/bin/sh" ] args: [ "-c", "cmd" ] Do you guys have any ideas? Thanks in advance. submitted by /u/Seh_yoji [link] [comments]
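
For what it’s worth, the command itself always runs as whatever user the container starts with; what usually works is forcing a non-root UID through the pod’s securityContext. A minimal sketch (the pod name and UID are illustrative):

```
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nonroot-demo            # hypothetical name
spec:
  securityContext:
    runAsUser: 1000             # any non-zero UID that exists in the image
    runAsGroup: 1000
    runAsNonRoot: true          # refuse to start if the container would run as root
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "id && sleep 3600"]   # 'id' should report uid=1000
EOF
```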

FluentBit alternatives (logging)

Hi, I’m currently using FluentBit to do a lot of the grunt work with logs and to send them to Elasticsearch. We are hitting a lot of issues with transport and dynamic indexing, so we have decided to look at what other options are around. I was hoping to get some recommendations (with reasons) so that we don’t have to start from scratch 🙂 Thanks! submitted by /u/airwalk225 [link] [comments]

Static Route in Docker Container

Hey guys, can anybody point me in the right direction? I want to set a permanent route in a docker container. I tried this, but on every restart (no rebuild) the route is missing. I tried these 2 commands, but they didn’t stick. echo "1 local" >> /etc/iproute2/rt_tables ip route add 192.168.1.0/24 via 192.168.2.1 dev eth0 table local The best way would be to set the ip route on docker start with a variable or command, but I think this isn’t possible? Would I need to create a Dockerfile, or am I wrong? submitted by /u/RyperX [link] [comments]
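
One common workaround (a sketch; the image and command names are hypothetical): re-apply the route from an entrypoint script on every start, and grant the container NET_ADMIN so it is allowed to modify its routing table.

```
#!/bin/sh
# entrypoint.sh: re-applies the route on every container start
# (assumes iproute2 is present in the image)
ip route add 192.168.1.0/24 via 192.168.2.1 dev eth0 || true
exec "$@"    # then hand off to the image's normal command

# Dockerfile additions:
#   COPY entrypoint.sh /entrypoint.sh
#   RUN chmod +x /entrypoint.sh
#   ENTRYPOINT ["/entrypoint.sh"]

# Run with the capability needed to change routes:
#   docker run --cap-add=NET_ADMIN myimage mycmd
```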

patterns for orchestrating docker containers

Hi all, I’m wondering what patterns people use to achieve some sort of ‘orchestration’ among containers. Say, in a situation like the following: there are a couple of containers that need SSL certificates; some are web applications or sites served over NGINX, and some are services like LDAP and TURN that need certificates to use secured protocols. Now, certificates should first be obtained using NGINX, then the other services depending on certificates should be started. There is also the situation when some (or all) certificates have to be renewed. In that case, already-running containers (depending on certificates) should be restarted/reloaded. Personally, I would like to somehow split ‘cron’ from the NGINX container and obtain/renew certificates in it. When this job is done, based on the list of certificates obtained/renewed, some sort of ‘signal’ should be sent to the other, affected containers so they can reload (or restart) the service in them. Additionally, if Let’s Encrypt is used, I guess the whole /etc/letsencrypt has to be copied into the running container(s) and proper ownership flags set before the service reload. This is in case I want to run the container as an unprivileged user. One of the things I considered is to use the docker socket mounted in a volume, but this leaves a security gap I would like to avoid. The other thing I was thinking about was some publisher-subscriber (like MQTT), but in this case I’m not sure how much work it would be to develop a script that ‘handles’ the process inside the container; and I would like to avoid having an additional ‘management’ process inside a container if possible. Thank you kindly submitted by /u/nikoladsp [link] [comments]
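
One socket-free pattern that may fit (a sketch; it assumes inotify-tools is installed in the image and a shared /certs volume that the renewal container writes into): each dependent container runs a tiny watch loop and reloads its own service when the certificate files change, so no outside ‘management’ process is needed.

```
#!/bin/sh
# Start the service, then reload it whenever the cert volume changes.
nginx                                          # nginx self-daemonizes here
while inotifywait -e close_write -e moved_to /certs; do
  echo "certificates changed, reloading"
  nginx -s reload                              # swap in the reload verb of
done                                           #   whatever service this wraps
```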

Docker for web hosting

Hi, I want to know if websites hosted on Docker are secure, whether they can see each other, and whether they can ping each other. Please let me know. Also, are there any tools that can be used to orchestrate or manage the hosting? Thank you submitted by /u/donrayss [link] [comments]
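
For what it’s worth, containers on different user-defined bridge networks cannot reach (or ping) each other by default, so giving each site its own network is one basic isolation pattern, with a reverse proxy joined to all of them as the only shared path. A sketch (names are illustrative):

```
docker network create site-a-net
docker network create site-b-net

docker run -d --name site-a --network site-a-net nginx
docker run -d --name site-b --network site-b-net nginx

# site-a cannot resolve or ping site-b: user-defined bridges are isolated.
# A reverse proxy container joined to both networks (docker network connect)
# would be the only shared path to the outside world.
```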

Send Different Environment Variables to Pods created from a Job

I am trying to make a K8s Job that will start pods when files are added to a folder. The pods will be running containers that process the files. I would like the job to create a new pod for each file when it is added to the folder, and have it set an environment variable (containing the file’s name) in the container running in the newly created pod. My constraint is that I cannot edit the container; I can only set its environment variables. Does anyone have any hints on how I could do this in the job spec? submitted by /u/tetrafx [link] [comments]
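
A Job can’t watch a folder by itself; one common sketch (the image and path are hypothetical) is a small script that creates one Job per new file via generateName, injecting the filename as an env var:

```
#!/bin/sh
# Creates one Job per file in the folder; FILENAME is set per Job.
for f in /data/incoming/*; do
  cat <<EOF | kubectl create -f -    # 'create' (not 'apply') honors generateName
apiVersion: batch/v1
kind: Job
metadata:
  generateName: process-file-
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: myprocessor:latest    # hypothetical processing image
        env:
        - name: FILENAME             # the file this Job should process
          value: "$f"
EOF
done
```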

Rancher 2.4 Keeps Your Innovation Engine Running with Zero Downtime Upgrades

Delivering rapid innovation to your users is critical in the fast-moving world of technology. Kubernetes is an amazing engine to drive that innovation in the cloud, on-premise and at the edge. All that said, Kubernetes and the entire ecosystem itself changes quickly. Keeping Kubernetes up to date for security and new functionality is critical to any deployment.
Introducing Zero Downtime Cluster Upgrades

In Rancher 2.4, we’re introducing zero downtime cluster upgrades.

CKA vs CKAD

A while back I purchased a voucher for both the CKA and CKAD exams for $300. I recently passed the CKA exam and wanted to ask about the difficulty difference between the two. I’m going with Mumshad’s course (like everyone else). Some of the labs require you to be super fast. I know the exam is shorter and you get less time (24 q / 3 hours vs 19 q / 2 hours). I finished the CKA in 2 hours and got an 85%… but I wanted to know if you had any major time issues. And just other tips. submitted by /u/vennemp [link] [comments]

Is maintaining all of these Kubernetes apps getting to be too complex?

I’m talking about infrastructure apps such as external-dns, cert-manager, nginx-ingress, prometheus-operator, kube-metrics-adapter, etc. Each of these works a little differently on each cloud, and one can depend on another. This sometimes makes troubleshooting an error with one of these apps difficult. What do you guys think? submitted by /u/gargar454 [link] [comments]

Sysdig’s Kris Nóva: We Can Never Be Prepared But Open Source Can Help

Three years in the software industry is like a score in the rest of the world. It’d seem futile to write a book because, by the time it’s published, it’s outdated. But for Kris Nóva, who co-authored “Cloud Native Infrastructure” back in 2017, much of it still rings true today. After all, when you take a step back from the brands and the tooling, the transparency-focused culture and the declarative infrastructure is still the same.
In this episode of The New Stack Makers podcast, we talk to Nóva, chief open source advocate at Sysdig, about the progression of the open source world and her perspective examining it through the lens of San Francisco’s COVID-19 lockdown. She calls the book she wrote with Justin Garrison a kind of thesis that looks to predict the infrastructural patterns that could solve a lot of the challenges cloud native infrastructure teams face.
Most of their predictions, like wrapping up infrastructure into a standardized API, were dead on. But they couldn’t have foreseen the three big cloud providers (Amazon Web Services, Google Cloud Platform and Microsoft Azure) going all-in on this idea of building a vendor-neutral, open source, community-backed solution. In turn, this allowed other cloud providers like DigitalOcean to compete.

Nóva said, “It’s cool to see people coming together in this community and the cluster API not working just for the big three as we had originally planned, but also working for the little guys or the new guys. And I think that’s just a good example of a good abstraction and a good design.”
But what about security? Can open source really be more secure? From SSL to GPG keys to TLS, she points out that a lot of the basis for security is in the open source space.
“In a weird way, the actual intellectual property of keeping them [open source projects] secure has more eyes on it, more contributions and more support. And it’s used in different ways. And I think that makes for a healthier, more rounded, security implementation,” Nóva said.
Although, she admits, open source security is a harder sell. Sysdig is the creator of Falco, which she calls “the only runtime open source security tool out there.” She says they don’t have difficulty finding and protecting users, but gathering use cases and testimonials is inherently harder: no one wants to be the end user publicly demonstrating what keeps their systems secure.
In the end, Nóva says there’s no such thing as perfect or safe software. And you can only prepare for so much.
Reflecting on her forced home quarantine, she said, “I don’t think California, the state where I live, or even the country I live in, was prepared for a lot of this. This [pandemic] kind of caught us off guard. And I think you see that same pattern in open source in technology… For lack of a better term, there are things that happen, that you’re not prepared for. So I think to have a good set of monitoring and detection tools in place, whether they’re open source or not, is going to be more and more important as we start to prepare for the unexpected.”
Photo by Sangga Rima Roman Selia on Unsplash.
The post Sysdig’s Kris Nóva: We Can Never Be Prepared But Open Source Can Help appeared first on The New Stack.
