apiVersion: v1
kind: Namespace
metadata:
  name: debug
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: debug-app
  namespace: debug
@brtkwr
brtkwr / K8s-Raw-Block-Kata.md
Created April 12, 2019 14:57 — forked from amshinde/K8s-Raw-Block-Kata.md
K8s Raw Block storage support with Kata

Running Kata Containers in Minikube for Kubernetes 1.14+

minikube is an easy way to try out a Kubernetes (k8s) cluster locally. It runs a single-node k8s stack inside a local VM.

Kata Containers is an OCI compatible container runtime that runs container workloads inside VMs.

Wouldn't it be nice if you could use Kata under minikube to get an easy, out-of-the-box way to try it? It turns out that this is already supported: with a little bit of config and setup, you can!

@brtkwr
brtkwr / install_mosh_locally.sh
Created November 9, 2018 22:30 — forked from lazywei/install_mosh_locally.sh
Install mosh server without root permission
#!/bin/sh
# this script does absolutely ZERO error checking. however, it worked
# for me on a RHEL 6.3 machine on 2012-08-08. clearly, the version numbers
# and/or URLs should be made variables. cheers, zmil...@cs.wisc.edu
mkdir mosh
cd mosh
@brtkwr
brtkwr / TFLunarLander-v2.md
Last active March 6, 2017 12:03 — forked from warchildmd/lunar_lander_v2_0_1.py
Q-learning implemented in TensorFlow with an epsilon hyper-parameter for choosing actions

Comments

This is the best Q-learning implementation in TensorFlow that I stumbled upon before I found https://github.com/brtknr/general-gym-player.

The reason I decided not to use this particular script was that I did not quite understand why a value of epsilon needs to be specified for choosing actions. It felt arbitrarily defined to me and not generally applicable to other environments.
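
For context, the epsilon in question is the exploration rate of an epsilon-greedy policy: with probability epsilon the agent takes a random action, otherwise the greedy one. A minimal sketch of that idea (the helper below is hypothetical, not taken from the forked gist):

import numpy as np

def epsilon_greedy(q_values, epsilon, rng=np.random):
    # Explore with probability epsilon, otherwise exploit the best Q-value.
    # The right epsilon (and its decay schedule) does vary by environment,
    # which is what can make a single hard-coded value feel arbitrary.
    if rng.rand() < epsilon:
        return rng.randint(len(q_values))  # random exploratory action
    return int(np.argmax(q_values))        # greedy action

# With epsilon = 0.1, roughly 1 in 10 actions is random.
action = epsilon_greedy(np.array([0.2, 0.8, 0.5]), epsilon=0.1)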

@brtkwr
brtkwr / MyLunarLander.py
Last active April 23, 2018 08:40 — forked from awjuliani/rl-tutorial-2.ipynb
Reinforcement Learning Tutorial 2 (Cart Pole problem)
import numpy as np
import pickle
import tensorflow as tf
import matplotlib.pyplot as plt
import math
import gym

# Create the LunarLander environment and inspect its observation space
env = gym.make('LunarLander-v2')
print('Shape of the observation space is', env.observation_space.shape)
@brtkwr
brtkwr / DQN-LunarLander-v2.md
Last active March 2, 2017 12:00
LunarLander-v2 DQN agent

A DQN implementation built on Keras-RL, using a per-episode epsilon-greedy decay policy.
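
For a sense of what a Keras-RL DQN agent for this environment looks like, here is a minimal sketch assuming the keras-rl API of that era; the network size, hyper-parameters, and fixed eps value are placeholders, and the gist's per-episode decay schedule is not reproduced here.

import gym
from keras.models import Sequential
from keras.layers import Dense, Flatten
from keras.optimizers import Adam
from rl.agents.dqn import DQNAgent
from rl.policy import EpsGreedyQPolicy
from rl.memory import SequentialMemory

env = gym.make('LunarLander-v2')
nb_actions = env.action_space.n

# Simple MLP Q-network; window_length=1 below means one observation per state
model = Sequential()
model.add(Flatten(input_shape=(1,) + env.observation_space.shape))
model.add(Dense(64, activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Dense(nb_actions, activation='linear'))

memory = SequentialMemory(limit=50000, window_length=1)
policy = EpsGreedyQPolicy(eps=0.1)  # fixed eps placeholder; the gist decays it per episode
dqn = DQNAgent(model=model, nb_actions=nb_actions, memory=memory,
               nb_steps_warmup=1000, target_model_update=1e-2, policy=policy)
dqn.compile(Adam(lr=1e-3), metrics=['mae'])
dqn.fit(env, nb_steps=100000, visualize=False, verbose=1)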

Requirements that can be installed using pip:

Forked and modified from the original to be compatible with the following:

@brtkwr
brtkwr / VPG-CartPole-v0.md
Last active March 2, 2017 11:43 — forked from domluna/README.md
Vanilla policy gradient, no baseline

This has been forked from the original and modified to be compatible with:

  • tensorflow.__version__ = 0.12.1
  • gym.version.VERSION = 0.7.3

Run with the defaults from a terminal:

$ python vpg.py
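
To make the "no baseline" point concrete, here is a minimal REINFORCE sketch with a linear softmax policy that uses the raw discounted returns directly, with no baseline subtraction. It is hypothetical (not the gist's vpg.py) and assumes the classic gym step API that returns a 4-tuple.

import gym
import numpy as np

env = gym.make('CartPole-v0')
n_obs = env.observation_space.shape[0]
n_act = env.action_space.n
W = np.zeros((n_obs, n_act))  # linear softmax policy parameters
lr, gamma = 0.01, 0.99

def action_probs(obs):
    logits = obs @ W
    logits -= logits.max()  # numerical stability
    p = np.exp(logits)
    return p / p.sum()

for episode in range(500):
    obs, done = env.reset(), False
    grads, rewards = [], []
    while not done:
        p = action_probs(obs)
        a = np.random.choice(n_act, p=p)
        # grad of log pi(a|obs) w.r.t. W for a linear softmax policy
        dlog = -np.outer(obs, p)
        dlog[:, a] += obs
        grads.append(dlog)
        obs, r, done, _ = env.step(a)
        rewards.append(r)
    # Discounted returns; vanilla PG uses them as-is, with no baseline
    G, running = np.zeros(len(rewards)), 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        G[t] = running
    for dlog, ret in zip(grads, G):
        W += lr * ret * dlog  # REINFORCE update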