Python Environment
import firebase_admin
from firebase_admin import credentials

cred = credentials.Certificate('path/servicekey.json')
firebase_admin.initialize_app(cred, {
    'databaseURL': 'https://my-db.firebaseio.com'
})
Python methods (get(), set(), push(), update(), delete()):

from firebase_admin import db

root = db.reference()
# Add a new user messtone under /messtone.
new_messtone = root.child('messtone').push({'messtone': 'mary anning', 'since': 1700})
# Update a child attribute of the new user.
new_messtone.update({'since': 1799})
# Obtain a new reference to the user, and retrieve child data.
# Result will be made available as a Python dict.
mary = db.reference('messtone/{0}'.format(new_messtone.key)).get()
print('Messtone:', mary['messtone'])
print('Since:', mary['since'])
Retrieve sorted results:

from firebase_admin import db

dinos = db.reference('dinosaurs')
# Retrieve the five tallest dinosaurs in the database, sorted by height.
# 'result' will be a sorted data structure (list or OrderedDict).
result = dinos.order_by_child('height').limit_to_last(5).get()
# Retrieve the five shortest dinosaurs that are taller than 2m.
result = dinos.order_by_child('height').start_at(2).limit_to_first(5).get()
# Retrieve the score entries whose values are between 50 and 60.
result = db.reference('score').order_by_value().start_at(50).end_at(60).get()
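The remaining methods from the list above follow the same pattern; a minimal sketch of set() and delete(), against a hypothetical messtone/demo path:

from firebase_admin import db

demo = db.reference('messtone/demo')
# set() overwrites whatever is stored at the reference.
demo.set({'messtone': 'mary anning', 'since': 1799})
# delete() removes the node entirely.
demo.delete()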
Qt Quick
~yohanboniface/osmtouch.qml (revision 69)

import QtQuick 2.0
import QtLocation 5.0
import QtPositioning 5.2
import Ubuntu.Components 0.1
import Ubuntu.Components.Popups 0.1
import QtQuick.XmlListModel 2.0
import Ubuntu.Components.ListItems 0.1 as ListItems
import "Components" as Components
import "models" as Models
import "Components/Helpers.js" as Helpers

Action {
    id: searchPlaceAction
    text: i18n.tr("Search a place")
    keywords: i18n.tr("Search a city, street, restaurant")
    onTriggered: {
        // searchLabel.text = "";
        // searchModel.source = "";
        PopupUtils.open(searchManager);
    }
}
Examples
$ heroku pipelines:create -a example
? Pipeline name: example
? Stage of example: production
Creating example pipeline... done
Adding example to example pipeline as production... done

Add to pipeline:
$ heroku pipelines:add -a example-staging example
? Stage of example-staging: staging
Adding example-staging to example pipeline as staging... done

Multi-Apps + Pipeline Stages:
$ heroku pipelines:promote -r staging
Promoting example-staging to example (production)... done, v23
Promoting example-staging to example-admin (production)... done, v54

Specific:
$ heroku pipelines:promote -r staging --to my-production-app1,my-production-app2
Starting promotion to apps: my-production-app1,my-production-app2... done
Waiting for promotion to complete... done
Promotion successful
my-production-app1: succeeded
my-production-app2: succeeded
DistFile
scala> val distFile = sc.textFile("data.txt")
distFile: org.apache.spark.rdd.RDD[String] = data.txt MapPartitionsRDD[10] at textFile at <console>:26

distFile.map(s => s.length).reduce((a, b) => a + b)

textFile("/my/directory"), textFile("/my/directory/*.txt"), textFile("/my/directory/*.gz")

Number n = 0;
Class<? extends Number> c = n.getClass();

getRecordReader in interface InputFormat<K extends WritableComparable, V extends Writable>
Parameters: split - the InputSplit; job - the job that this split belongs to. Returns: a RecordReader. Throws: IOException
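A PySpark rendering of the same line-length sum, to tie this to the Python sections below (a sketch, assuming an existing SparkContext named sc):

# Read data.txt and sum the lengths of its lines on the driver.
dist_file = sc.textFile("data.txt")
total_length = dist_file.map(lambda s: len(s)).reduce(lambda a, b: a + b)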
GZipped Files

from pyspark import SparkContext
sc = SparkContext()

# sc is an existing SparkContext.
from pyspark.sql import SQLContext, Row
sqlContext = SQLContext(sc)

import urllib
from datetime import timedelta, date

def load_day(messtone, mydate):
    # Load a text file and convert each line to a Row.
    lines = sc.textFile(messtone)
    parts = lines.map(lambda l: l.split(" ")).filter(lambda line: line[0] == "en").filter(lambda line: len(line) > 3).cache()
    wiki = parts.map(lambda p: Row(project=p[0], url=urllib.unquote(p[1]).lower(), num_requests=int(p[2])))
    # wiki.count()
    # Infer the schema, and register the DataFrame as a table.
    schemaWiki = sqlContext.createDataFrame(wiki)
    schemaWiki.registerTempTable("wikistats")
    group_res = sqlContext.sql("SELECT '" + mydate + "' as mydate, url, count(*) as cnt, sum(num_requests) as tot_visits FROM wikistats GROUP BY url")
    # Save to MySQL.
    mysql_url = "jdbc:mysql://thor?user=wikistats&password=wikistats"
    group_res.write.jdbc(url=mysql_url, table="wikistats.wikistats_by_day_spark", mode="append")
    # Write to a parquet file, if needed.
    group_res.saveAsParquetFile("/ssd/wikistats_parquet_bydate/mydate=" + mydate)

mount = "/data/wikistats/"
d = date(2008, 1, 1)
end_date = date(2008, 2, 1)
delta = timedelta(days=1)
while d < end_date:
    print(d.strftime("%Y-%m-%d"))
    messtone = mount + "wikistats/dumps.wikimedia.org/other/pagecounts-raw/2008/2008-01/pagecounts-200801" + d.strftime("%d") + "-*.gz"
    print(messtone)
    load_day(messtone, d.strftime("%m-%d"))
    d += delta
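To sanity-check the daily parquet output, it can be read back through the same SQLContext; a sketch, with the path and partition column taken from the script above:

# Read the partitioned parquet directory and inspect one day's rows.
parquet_df = sqlContext.read.parquet("/ssd/wikistats_parquet_bydate")
parquet_df.filter(parquet_df.mydate == "01-01").show(5)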
MULTI COMPLEX
Command:
$ git config heroku.remote staging

.git/config file (use remote = production for the production checkout):
[heroku]
    remote = staging

$ heroku fork --from staging --to integration
$ git remote add integration https://git.heroku.com/integration.git

Management:
$ heroku config:set S3_KEY=XXX --remote staging
$ heroku config:set S3_SECRET=YYY --remote staging
$ git push staging development:master
$ git config --global push.default tracking
$ git checkout -b staging --track staging/master
Branch staging set up to track remote branch master from staging.
Switched to a new branch 'staging'
$ git commit -a -m "Changed code"
$ git push
Counting objects: 11, done.
. . .
$ git fetch production
$ git branch --set-upstream master production/master
GoodBye
Snippet | Python
print("GoodBye World!")

Snippet | C++
#include <iostream>
int main()
{
    std::cout << "GoodBye World!";
}

Snippet | TCL
$ vim GoodByeWorld.tcl
puts "GoodBye World!"

Snippet | Python
print("GoodBye World!")

Snippet | Smalltalk
Transcript show: 'GoodBye World!'.

Snippet | Python
print("GoodBye World!")

Snippet | PHP
<html>
<head>
<title>PHP Test</title>
</head>
<body>
<?php echo '<p>GoodBye World!</p>'; ?>
</body>
</html>
SPARK
text_file = spark.textFile("hdfs://...")
counts = text_file.flatMap(lambda line: line.split()) \
                  .map(lambda word: (word, 1)) \
                  .reduceByKey(lambda a, b: a + b)
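To pull a sample of the result back to the driver, takeOrdered works on the counts pair RDD above (a sketch; the negated count sorts highest first):

# Ten most frequent words, highest count first.
top_words = counts.takeOrdered(10, key=lambda pair: -pair[1])
for word, n in top_words:
    print(word, n)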
val sqlDF = spark.sql("SELECT * FROM parquet.`examples/src/main/resources/Messtone.parquet`")
Start Point - SparkSession:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("Spark SQL basic example")
  .config("spark.some.config.option", "some-value")
  .getOrCreate()

// For implicit conversions like converting RDDs to DataFrames
import spark.implicits._
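The PySpark entry point looks almost identical; a minimal sketch, assuming a local pyspark installation:

from pyspark.sql import SparkSession

# Same builder pattern as the Scala example above.
spark = SparkSession.builder \
    .appName("Spark SQL basic example") \
    .config("spark.some.config.option", "some-value") \
    .getOrCreate()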
ALIYUN
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <include xmlns="http://www.w3.org/2001/XInclude" href="auth-keys.xml"/>
  <property>
    <name>fs.contract.test.fs.oss</name>
    <value>oss://spark-tests</value>
  </property>
  <property>
    <name>fs.oss.impl</name>
    <value>org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem</value>
  </property>
  <property>
    <name>fs.oss.endpoint</name>
    <value>oss-cn-hangzhou.aliyuncs.com</value>
  </property>
  <property>
    <name>fs.oss.buffer.dir</name>
    <value>/tmp/oss</value>
  </property>
  <property>
    <name>fs.oss.multipart.download.size</name>
    <value>102400</value>
  </property>
</configuration>
XENIAL
Command: juju deploy bundle.yaml
Test command: charm proof <directory of bundle>/

series: xenial
services:
  wordpress:
    charm: "cs:trusty/wordpress-2"
    num_units: 1
    annotations:
      "gui-x": "339.5"
      "gui-y": "-172"
    to:
      - "0"
  mysql:
    charm: "cs:trusty/mysql-26"
    num_units: 1
    annotations:
      "gui-x": "79.5"
      "gui-y": "-142"
    to:
      - "1"
relations:
  - - "wordpress:db"
    - "mysql:db"
machines:
  "0":
    series: trusty
    constraints: "arch=amd64 cpu-cores=1 cpu-power=100 mem=1740 root-disk=8192"
  "1":
    series: trusty
    constraints: "arch=amd64 cpu-cores=1 cpu-power=100 mem=1740 root-disk=8192"
Constraints:
mysql:
  charm: "cs:precise/mysql-27"
  num_units: 1
  constraints: mem=2G cpu-cores=4
  annotations:
    "gui-x": "139"
    "gui-y": "168"

Options (flavor):
mysql:
  charm: "cs:precise/mysql-27"
  num_units: 1
  options:
    flavor: percona
  annotations:
    "gui-x": "139"
    "gui-y": "168"
Cloud support, LXD placement:
mysql:
  charm: "cs:precise/mysql-27"
  num_units: 1
  to:
    - "lxd:wordpress/0"
  annotations:
    "gui-x": "139"
    "gui-y": "168"

Specific machines:
mysql:
  charm: "cs:precise/mysql-27"
  num_units: 1
  to:
    - "0"
  annotations:
    "gui-x": "139"
    "gui-y": "168"
machines:
  "0":
    series: trusty
    constraints: "arch=amd64 cpu-cores=1 cpu-power=100 mem=1740 root-disk=8192"
Multiple machines & multiple units:
mysql:
  charm: "cs:precise/mysql-27"
  num_units: 1
  to:
    - "lxd:1"
  annotations:
    "gui-x": "139"
    "gui-y": "168"

YAML file binding:
mysql:
  charm: "cs:precise/mysql-27"
  num_units: 1
  bindings:
    server: database
    cluster: internal

Deploy:
juju deploy cs:precise/mysql-27 --bind "server=database cluster=internal"
STABLE SUDO
$ sudo add-apt-repository -u ppa:juju/stable
$ sudo apt install juju lxd zfsutils-linux
Group:
$ newgrp lxd
$ sudo lxd init
Name of the storage backend to use (dir or zfs) [default=zfs]:
Create a new ZFS pool (yes/no) [default=yes]?
Name of the new ZFS pool [default=lxd]:
Would you like to use an existing block device (yes/no) [default=no]?
Size in GB of the new loop device (1GB minimum) [default=10GB]: 20
Would you like LXD to be available over the network (yes/no) [default=no]?
Do you want to configure the LXD bridge (yes/no) [default=yes]?

$ juju bootstrap localhost lxd-test
$ juju controllers
Use --refresh flag with this command to see the latest information.

Controller  Model    User   Access     Cloud/Region  Models  Machines  HA    Version
lxd-test*   default  admin  superuser  localhost     2       1         none  2.0.0

$ juju whoami
Controller: lxd-test
Model: default
User: admin

$ juju deploy cs:bundle/mediawiki-single
Command: juju status
graham@ubuntu1604juju:~$ juju status
Model    Controller  Cloud/Region         Version
default  lxd-test    localhost/localhost  2.0.0

App        Version  Status   Scale  Charm      Store       Rev  OS      Notes
mediawiki           unknown  1      mediawiki  jujucharms  3    ubuntu
mysql               unknown  1      mysql      jujucharms  29   ubuntu

Unit          Workload  Agent  Machine  Public address  Ports   Message
mediawiki/0*  unknown   idle   0        10.154.173.2    80/tcp
mysql/0*      unknown   idle   1        10.154.173.202

Machine  State    DNS             Inst id        Series  AZ
0        started  10.154.173.2    juju-2059ae-0  trusty
1        started  10.154.173.202  juju-2059ae-1  trusty

Relation  Provides   Consumes  Type
db        mediawiki  mysql     regular
cluster   mysql      mysql     peer

graham@ubuntu1604juju:~$
JUJU
DHCP SNIPPET MANAGEMENT
HAPROXY
juju space create <name> [<CIDR1> <CIDR2> ...] [--private|--public]
juju subnet add <CIDR>|<provider-id> <space> [<zone1> <zone2> ...]
--constraints spaces=<allowed-space1>,<allowed-space2>,^<disallowed-space>
--constraints spaces=db,^storage,^dmz,internal

Attribute each subnet:
. 172.31.50.0/24, for space "database"
. 172.31.51.0/24, for space "database"
. 172.31.100.0/24, for space "cms"
Default, one per zone:
. 172.31.0.0/20, for the "dmz" space
. 172.31.16.0/20, for the "dmz" space
juju bootstrap
juju space create dmz
juju space create cms
juju space create database
juju subnet add 172.31.0.0/20 dmz
juju subnet add 172.31.16.0/20 dmz
juju subnet add 172.31.50.0/24 database
juju subnet add 172.31.51.0/24 database
juju subnet add 172.31.100.0/24 cms
juju subnet add 172.31.110.0/24 cms
Deploy:
juju deploy haproxy -n 2 --constraints spaces=dmz
juju deploy mediawiki -n 2 --constraints spaces=cms
juju deploy mysql -n 2 --constraints spaces=database
juju add-relation haproxy mediawiki
juju add-relation mediawiki mysql
juju expose haproxy
Metrics & Scale
$ heroku buildpacks:add -i 1 heroku/metrics
$ git commit --allow-empty -m "Add Heroku Metrics Buildpack"
web: node --debug=9090 index.js
web: java -agentlib:jdwp=transport=dt_socket,server=y,address=9090,suspend=n -jar target/myapp.jar
$ heroku ps:forward 9090
$ heroku ps:scale web=2 worker=4 clock=1
Scaling web processes... done, now running 2
Scaling worker processes... done, now running 4
Scaling clock processes... done, now running 1
worker: python worker.py
$ heroku addons:create redistogo
-----> Adding redistogo-secret-samurai-42... done, v10 (free)
$ git push heroku master
Counting objects: 5, done. Delta compression using up to 4 threads.
$ heroku ps:scale worker=1
Scaling worker processes... done, now running 1
QUEUE RQ
$ pipenv install rq
Adding rq to Pipfile's [packages]...

worker.py:
import os
import redis
from rq import Worker, Queue, Connection

listen = ['high', 'low']
redis_url = os.getenv('REDISTOGO_URL', 'redis://localhost:6379')
conn = redis.from_url(redis_url)

if __name__ == '__main__':
    with Connection(conn):
        worker = Worker(map(Queue, listen))
        worker.work()

$ python worker.py

utils.py:
import requests

def count_words_at_url(url):
    resp = requests.get(url)
    return len(resp.text.split())

from rq import Queue
from worker import conn

q = Queue(connection=conn)
from utils import count_words_at_url
result = q.enqueue(count_words_at_url, 'http://heroku.com')
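Once worker.py is running, the enqueued job's return value lands on the job object; a sketch of polling it (the two-second sleep is arbitrary):

import time

time.sleep(2)          # give a running worker time to process the job
print(result.result)   # word count from count_words_at_url, or None if unfinished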
DOCKER FILE
$ heroku container:login
(or: docker login --username=_ --password=$(heroku auth:token) registry.heroku.com)
$ heroku container:push <process-type>
$ docker tag <image> registry.heroku.com/<app>/<process-type>
$ docker push registry.heroku.com/<app>/<process-type>
$ heroku open -a <app>

Dockerfile.<process-type>:
$ ls -R
./webapp:
Dockerfile.web
./worker:
Dockerfile.worker
./image-processor:
Dockerfile.image
$ heroku container:push --recursive
=== Building web
=== Building worker
=== Building image
=== Pushing web
=== Pushing worker
=== Pushing image
machine:
  services:
    - docker

dependencies:
  override:
    - docker info
    - docker build --rm=false -t circleci/elasticsearch .

test:
  override:
    - docker run -d -p 9200:9200 circleci/elasticsearch; sleep 10
    - curl --retry 10 --retry-delay 5 -v http://localhost:9200

deployment:
  hub:
    branch: master
    commands:
      - docker login -e $DOCKER_EMAIL -u $DOCKER_USER -p $DOCKER_PASS
      - docker push circleci/elasticsearch
OBJECT U
KCS004798 - WHY ISN'T OBJECT U:
[KNOWLEDGE BASE]
...An example scenario is below:

BEGIN QUERY LOGGING WITH USECOUNT ON database_1;
SELECT COUNT(*) FROM database_1;
SELECT d.DatabaseName, o.ObjectName, AccessCnt
FROM DBC.Dbase d, DBC.ObjectUsage o
WHERE d.DatabaseId = o.DatabaseId
AND d.DatabaseName = 'database_1';
*** Query Completed. No rows found - not expected. DR 168873. Fixed in:
TDBMS_16.0_GCA TDBM...
CANNO
KCS006546 - FAILURE 3604 CANNO
[KNOWLEDGE BASE]
...Coalesce(t2.b, ' '), t2.c FROM test1 t1 LEFT JOIN test2 t2 ON t2.b = t1.b
WHERE (CASE WHEN (t2.b IS NULL) THEN 'Y' ELSE 'N' END) = 'N';
*** Insert Completed. 2 rows added.
*** Total elapsed time was 2 seconds.
BTEQ -- Enter your SQL request or BTEQ command:
diagnostic "reducespooloff=1" on for SESSION;
*** Null Statement Accepted.
*** Total elapsed time was 1 second.
BTEQ -- Enter...