Posts

Embed JSX code with an if condition in a React app

renderPrice(pt) {
  if (pt.amountpaid != "") {
    let ctrl = (
      <>
        <div>
          <label>Price</label><br/>
          <input type="text" readOnly name="amountpaid" value={pt.amountpaid} />
        </div>
      </>
    );
    return ctrl;
  } // end if
} // end renderPrice

render() {
  const priceElement = this.renderPrice(purchasetransaction);
  return (
    <div style={{ marginTop: '5em', marginLeft: '2em' }}> ...

Reset root password on MySQL on Windows 10 laptop

Stop the MySQL service.

Create a text file "mysql-init.txt" in the C:\temp directory and paste in this content:

ALTER USER 'root'@'localhost' IDENTIFIED BY 'MyNewPass';

Open a command prompt window and run this command:

mysqld --defaults-file="C:\\ProgramData\\MySQL\\MySQL Server 8.0\\my.ini" --init-file="C:\\temp\\mysql-init.txt"

Create a SQL table with autoincrement id and timestamps

CREATE TABLE `tutorials` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `title` varchar(255) DEFAULT NULL,
  `description` varchar(255) DEFAULT NULL,
  `published` tinyint(1) DEFAULT '1',
  `createdAt` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
  `updatedAt` timestamp NULL DEFAULT NULL ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`)
);
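The same idea can be tried locally with Python's built-in sqlite3, which is handy for quick experiments. Note the dialect differences: SQLite uses INTEGER PRIMARY KEY AUTOINCREMENT, and it has no ON UPDATE CURRENT_TIMESTAMP clause (an update trigger would be needed for that), so this sketch only covers the autoincrement id and the creation timestamp:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE tutorials (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        title TEXT DEFAULT NULL,
        description TEXT DEFAULT NULL,
        published INTEGER DEFAULT 1,
        createdAt TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    )
""")
# Insert rows without specifying id; SQLite assigns 1, 2, ... automatically
conn.execute("INSERT INTO tutorials (title) VALUES ('first post')")
conn.execute("INSERT INTO tutorials (title) VALUES ('second post')")
rows = conn.execute("SELECT id, title, published FROM tutorials").fetchall()
# rows == [(1, 'first post', 1), (2, 'second post', 1)]
```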

AWS Policy editor

https://awspolicygen.s3.amazonaws.com/policygen.html

Download files from Databricks Community Edition

Syntax is: https://community.cloud.databricks.com/files/myStuffs/yourdata.csv/part-00000?o=<yournumber>
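For repeated downloads, the URL can be assembled from its parts. This is a hypothetical helper, not a Databricks API; the workspace number (the ?o= value) must come from your own workspace URL:

```python
def databricks_file_url(path, part, workspace_id):
    # Assembles the Community Edition file-download URL:
    # /files/<dbfs path under FileStore>/<part file>?o=<workspace number>
    base = "https://community.cloud.databricks.com/files"
    return f"{base}/{path}/{part}?o={workspace_id}"

# e.g. databricks_file_url("myStuffs/yourdata.csv", "part-00000", 123456)
```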

Run PySpark in a Jupyter notebook

# Check which Python is installed
$ which python
$ whereis python

# Install the EPEL repository first
$ sudo yum install epel-release

# Install python-pip
$ sudo yum -y install python-pip
$ sudo pip install --upgrade setuptools

# Or install Anaconda
$ wget https://repo.anaconda.com/archive/Anaconda2-5.0.1-Linux-x86_64.sh
$ sudo sh Anaconda2-5.0.1-Linux-x86_64.sh

1) Install PySpark: pip install pyspark
2) Install Java
3) Install Jupyter notebook: pip install jupyter
4) Install findspark: pip install findspark

In the notebook:

%env SPARK_HOME=c:\spark

# Use findspark to locate pyspark
import findspark
findspark.init()

# Creating the Spark context
from pyspark import SparkContext
sc = SparkContext("local", "first app")

# Calculating word counts
text_file = sc.textFile("OneSentence.txt")
counts = text_file.flatMap(lambda line: line.split(" ")) \
    .map(lambda word: (word, 1)) \
    .reduceByKey(lambda a, b: a + b)

# Printing each word with its respective coun...
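The flatMap/map/reduceByKey pipeline above can be mirrored in plain Python, which is a quick way to sanity-check the logic without a Spark install. Here the file is replaced by an in-memory list of lines for illustration:

```python
from collections import Counter

def word_counts(lines):
    # flatMap: split every line into words; map + reduceByKey: count per word
    return Counter(word for line in lines for word in line.split(" "))

counts = word_counts(["to be or not to be"])
# counts["to"] == 2, counts["be"] == 2, counts["or"] == 1, counts["not"] == 1
```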

Useful icons

https://pngtree.com/free-icon/node-other-database-cluster_741016
https://www.clipartmax.com/