This document outlines how to host the Keystone live-streaming system in development, staging, and production.
| Component | Role | Ports / protocol | Depends on |
|---|---|---|---|
```ts
// Basic Types
let id: number = 5
let company: string = 'Traversy Media'
let isPublished: boolean = true
let x: any = 'Hello'

let ids: number[] = [1, 2, 3, 4, 5]
let arr: any[] = [1, true, 'Hello']

// Tuple - fixed-length array with fixed types (example values are illustrative)
let person: [number, string, boolean] = [1, 'Brad', true]
```
Not all Terraform providers are built for arm64.
One solution is to install the amd64 build of Terraform, which is easily done from the downloads page.
However, for those who use and switch between Terraform versions often, a more streamlined approach is desirable.
Enter asdf.
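As a sketch of that workflow, the asdf Terraform plugin can install and pin versions per project; the version number below is illustrative:

```bash
# Install the Terraform plugin and an example version.
asdf plugin add terraform
asdf install terraform 1.5.7

# Pin this directory to that version (writes .tool-versions),
# then confirm the shim resolves correctly.
asdf local terraform 1.5.7
terraform version
```

Switching projects then switches Terraform versions automatically through the shim.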
```bash
# Shows the commits on master that have not yet reached production,
# i.e. what the next deploy will ship.
alias deploydiff="git log production..master --pretty=format:'%<(25)%an %s' --abbrev-commit"
```
```ruby
# /config/initializers/sidekiq.rb

# Mirror Puma's thread count so Sidekiq's Redis connection pool is sized to match.
current_web_concurrency = Proc.new do
  web_concurrency = ENV['WEB_CONCURRENCY']
  web_concurrency ||= Puma.respond_to?(:cli_config) && Puma.cli_config.options.fetch(:max_threads)
  web_concurrency || 16
end

local_redis_url = Proc.new do
  # Assumed fallback for local development; adjust to your environment.
  ENV['REDIS_URL'] || 'redis://localhost:6379/0'
end
```
```ruby
desc 'Generates jmeter test plan'
task :generate_jmeter_plan, [:url, :email, :password, :count] do |t, args|
  require 'ruby-jmeter'
  generate_report *extract_options_from(args)
end

def extract_options_from(args)
  defaults = {
    url: 'http://lvh.me:3000',
    email: 'user@example.com',
    # The original listing stops above; the remaining defaults and the
    # merge below are assumed.
    password: 'password',
    count: 5
  }
  defaults.merge(args.to_h).values
end
```
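Invoking the task passes the arguments positionally in brackets; the values here are placeholders:

```bash
# Quote the whole target so the shell does not interpret the brackets.
bundle exec rake "generate_jmeter_plan[http://lvh.me:3000,user@example.com,secret,10]"
```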
```sql
DELIMITER $$

DROP DATABASE IF EXISTS `test` $$
CREATE DATABASE `test` $$

DROP PROCEDURE IF EXISTS `test`.`convert_all_tables_to_innodb` $$
CREATE PROCEDURE `test`.`convert_all_tables_to_innodb`()
BEGIN
  DECLARE done INT DEFAULT FALSE;
  DECLARE alter_sql VARCHAR(5000);
  -- The source listing is cut off after the SELECT below; the FROM/WHERE
  -- clause and the execution loop are an assumed reconstruction.
  DECLARE cur1 CURSOR FOR
    SELECT CONCAT('ALTER TABLE ', TABLE_SCHEMA, '.', TABLE_NAME, ' ENGINE=innodb, ALGORITHM=copy;') AS alter_sql
    FROM information_schema.tables
    WHERE ENGINE <> 'InnoDB' AND TABLE_TYPE = 'BASE TABLE'
      AND TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema');
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;

  OPEN cur1;
  convert_loop: LOOP
    FETCH cur1 INTO alter_sql;
    IF done THEN LEAVE convert_loop; END IF;
    SET @ddl = TRIM(TRAILING ';' FROM alter_sql);
    PREPARE stmt FROM @ddl; EXECUTE stmt; DEALLOCATE PREPARE stmt;
  END LOOP;
  CLOSE cur1;
END $$

DELIMITER ;
```
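Loading and invoking the procedure from the shell might look like this; the host, credentials, and file name are placeholders:

```bash
# Create the procedure, then run the conversion.
mysql -h prod-db.example.com -u admin -p < convert_all_tables_to_innodb.sql
mysql -h prod-db.example.com -u admin -p -e "CALL test.convert_all_tables_to_innodb();"
```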
**MySQL Database to Aurora Migration**

Preparation checklist:

1. Update the production security group so that inbound connections are accepted from the same VPC group as the source (required for replication).
2. On the production server, set the binary log retention hours: the number of hours before binary logs are automatically deleted. This also acts as the window of time in which to copy and migrate the production server into an Aurora replica. Currently the production server is set to NULL, which means AWS deletes the logs as soon as they are no longer needed:

   ```sql
   CALL mysql.rds_set_configuration('binlog retention hours', 48);
   ```

3. Convert all tables using the MEMORY engine to InnoDB. AWS converts all tables to InnoDB during migration, but keeping MEMORY engine tables on production causes replication issues (the stored procedure above does the conversion; see the verification commands below).
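Two quick checks for those last two items; the endpoint and credentials are placeholders, and `mysql.rds_show_configuration` is the RDS procedure for reading the retention setting back:

```bash
# Confirm the binlog retention window took effect.
mysql -h prod-db.example.com -u admin -p -e "CALL mysql.rds_show_configuration;"

# List any user tables still on the MEMORY engine.
mysql -h prod-db.example.com -u admin -p -e "
  SELECT TABLE_SCHEMA, TABLE_NAME
  FROM information_schema.tables
  WHERE ENGINE = 'MEMORY'
    AND TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema');"
```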
```bash
#!/bin/bash
# Syntax: update_image <CONTAINER_NAME> <IMAGE_REPO>

CONTAINER_NAME="$1"
IMAGE_REPO="$2"

if [ "$CONTAINER_NAME" = "" ]; then
  echo "Error: Missing container name"; exit 1;
fi

# Assumed symmetric check; the original listing ends at the test above.
if [ "$IMAGE_REPO" = "" ]; then
  echo "Error: Missing image repo"; exit 1;
fi
```
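Invocation is positional, matching the syntax comment; the names below are placeholders:

```bash
./update_image my-app-container registry.example.com/my-app
```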
```php
<?php

require_once dirname(__DIR__).'/../../../../app/AppKernel.php';

/**
 * Test case class helpful for entity tests requiring database interaction.
 * For regular entity tests it's better to extend the standard
 * \PHPUnit_Framework_TestCase instead.
 */
abstract class KernelAwareTest extends \PHPUnit_Framework_TestCase
{
    protected $kernel;

    // Assumed minimal body: boot the kernel before each test so subclasses
    // can reach the service container and Doctrine.
    public function setUp()
    {
        $this->kernel = new \AppKernel('test', true);
        $this->kernel->boot();
    }
}
```
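A concrete subclass then runs like any other PHPUnit test; the config flag and test path below assume a Symfony2-era layout:

```bash
phpunit -c app src/Acme/DemoBundle/Tests/Entity/UserRepositoryTest.php
```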