FuturoProssimo

TensorFlow: report on the second "Machine Learning and Data Science" Meetup, Rome (February 19, 2017)

170219-tensorflow.jpg

On February 16, 2017, the second meeting of the "Machine Learning and Data Science" Meetup (web, fb, slideshare) was held in Rome at the Talent Garden in Cinecittà. The event, organized together with the Google Developer Group Roma Lazio Abruzzo, was dedicated to a presentation of TensorFlow, Google's deep learning solution for machine learning.

Below is a brief summary of the February 16, 2017 meeting.

Before starting:

•    What is Machine Learning? Interview with Simone Scardapane on the Machine Learning Lab (December 1, 2016)
•    What is TensorFlow? Andrea Bessi, "TensorFlow CodeLab", Nov 16, 2016

Report

Background

On February 15, 2017, the "TensorFlow Dev Summit", the first official Google event dedicated to TensorFlow, was held in Mountain View (USA). TensorFlow, Google's deep learning solution, was released about a year ago and has now reached version 1.0, "production ready".
The "TensorFlow Dev Summit" opened with a keynote by Jeff Dean (wiki, web, lk), Google Senior Fellow; Rajat Monga (tw), TensorFlow lead in the Google Brain team; and Megan Kacholia (lk), Engineering Director of the TensorFlow/Brain team.

To dig deeper into #TFDevSummit 2017:

The Rome "Machine Learning" Meetup was an occasion to watch and discuss the keynote video together, and to give a short introduction to TensorFlow.
The event ended with a small reception open to all deep learning enthusiasts in Rome.

Simone Scardapane: “TensorFlow and Google, one year of exciting breakthroughs”

Simone (mup, web, fb, lk) opened the meeting with a short presentation on deep learning and TensorFlow.
"TensorFlow," Simone said, "has made the neural networks that underlie deep learning and artificial intelligence accessible to everyone. Before TensorFlow, managing and training large, complex neural networks was objectively difficult."
Modern deep learning architectures do indeed use very complex neural networks: in data imaging applications, for example, the model architectures involve tens of millions of parameters.

170219-reteneurale.jpg

(Neural network; image from http://joelouismarino.github.io/blog_posts/blog_googlenet_keras.html)

One of TensorFlow's great advantages is that it lets you define a neural network symbolically: the tool also provides an efficient compiler that handles the back-propagation process automatically.
TensorFlow can moreover be used through a simplified interface such as Keras, approaching a sort of declarative programming.
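To make the idea of "symbolic definition with automatic back-propagation" concrete, here is a minimal, illustrative sketch in pure Python (this is not TensorFlow's actual API): an expression graph is recorded as operations are composed, and gradients are then obtained by walking the graph backwards with the chain rule.

```python
# Minimal reverse-mode automatic differentiation (illustrative only):
# the expression is built symbolically, and back-propagation is automatic.

class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.grad = 0.0
        self._parents = parents  # pairs of (parent_var, local_gradient)

    def __add__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value * other.value,
                   ((self, other.value), (other, self.value)))

    def backward(self, seed=1.0):
        # Accumulate gradients through the recorded graph (chain rule).
        self.grad += seed
        for parent, local in self._parents:
            parent.backward(seed * local)

x = Var(2.0)
y = x * x + x * 3.0   # symbolic expression: y = x^2 + 3x
y.backward()          # automatic back-propagation
print(y.value, x.grad)  # 10.0 and dy/dx = 2x + 3 = 7.0
```

Real frameworks apply exactly this scheme to tensors instead of scalars, which is what makes training networks with millions of parameters tractable.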
The first release of TensorFlow, Simone recalled, came in November 2015 under the open Apache 2.0 license (what is it?). On February 15, 2017, during the TFDevSummit, TensorFlow 1.0, the first "production ready" version, was announced.
The availability of an open, user-friendly deep learning environment has allowed a vast community of experts, researchers and enthusiasts to grow, and high-impact software applications to be released. Simone showed a few examples.

1) Neural image captioning: software able to recognize and describe, or caption, images.

170216-tensorflow-s2.jpg
 

170216-ragazzo-aquilone.jpg

170216-tren-b-n.jpg

2) Google Neural Machine Translation (GNMT), which enabled the overhaul of Google Translate through deep learning: instead of translating word by word, it now analyzes the text as a whole, capturing meaning and context with an accuracy now close to human translation.

nmt-model-fast.jpg

170216-neural-translation.jpg
170216-neural-translation2.jpg

3) Generative Adversarial Networks (GANs): systems able to generate new data from an enormous training set, working with a pair of neural networks: the first produces new data, the second judges the quality of the result. These systems have already been used to generate artificial images and video-game scenery, and to enhance low-quality images and video footage.
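The adversarial pairing described above can be sketched as a toy numeric loop (an illustration of the alternating-update scheme only, not a real GAN: the "discriminator" here just judges closeness to the real data mean, and the "generator" nudges its output to fool it):

```python
import random
random.seed(0)

# Toy adversarial training loop (illustrative sketch, not a real GAN).
REAL_MEAN = 5.0

def sample_real():
    # Real data: values scattered around REAL_MEAN.
    return REAL_MEAN + random.uniform(-0.1, 0.1)

d_mean = 0.0   # discriminator's current belief about what "real" looks like
g_out = -3.0   # generator's current output
lr = 0.1

for _ in range(200):
    # Discriminator step: update its belief using real samples.
    d_mean += lr * (sample_real() - d_mean)
    # Generator step: move output toward what the discriminator deems real.
    g_out += lr * (d_mean - g_out)

print(round(g_out, 1))  # close to 5.0: generated data now resembles real data
```

In an actual GAN both parties are neural networks and both updates are gradient steps on opposing losses, but the alternation of the two roles is the same.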

170216-generative-image

4) AlphaGo: deep learning also underlies AI's recent spectacular successes in board games, such as the victory of Google's AlphaGo against the world Go champion.

170216-alphago.jpg

5) WaveNet, a generative model for raw audio, able to generate speech that mimics a human voice far more "naturally" than the best Text-to-Speech systems available today. WaveNet has also already been used to create artificial music.

Simone closed his talk by recalling that deep learning and ML will also feature in a dedicated track at Data Driven Innovation Roma 2017, to be held at Roma Tre University on February 24-25, 2017.

Summary of Jeff Dean, Rajat Monga and Megan Kacholia's keynote on TensorFlow

The opening keynote of the TF DevSummit 2017, led by Jeff Dean, Rajat Monga and Megan Kacholia, covered:

  • origins and history of TensorFlow
  • progress since the first open-source release of TensorFlow
  • TensorFlow's growing open-source community
  • TensorFlow performance and scalability
  • TensorFlow applications
  • exciting announcements!

Jeff Dean

jeff dean.jpg

Jeff said that Google's goal with TensorFlow is to build a "machine learning platform" that anyone can use.
TensorFlow was released about a year ago, but Google's work on machine learning and deep learning began five years earlier.
The first system, built in 2012, was DistBelief, a proprietary neural-network system suited to a production environment like Google's, based on distributed systems (see "Large Scale Distributed Deep Networks", pdf). DistBelief was used in many successful Google products such as Google Search, Google Voice Search, advertising, Google Photos, Google Maps, Google Street View, Google Translate and YouTube.
But DistBelief had many limitations: "We wanted a much more flexible, general-purpose system," Jeff said, "one that would be open source, adopted and developed by a large community worldwide, and aimed not only at production but also at research. So in November 2015 we announced TensorFlow, a solution able to run in many environments, including iOS, Android and Raspberry Pi, to run on CPUs, GPUs and TPUs, as well as on Google's Cloud, and to be used from languages such as Python, C++, Go, Haskell and R."

170216-tensorflow-keynote1.jpg
TensorFlow also offers sophisticated data-visualization tools, which has helped a large open-source community grow around it.

170216-tensorflow-keynote2.jpg

Rajat Monga

rajatmonga2.jpg

Rajat Monga officially announced the release of TensorFlow 1.0, illustrating its new features.

170216-tensorflow-keynote3.jpg

170216-tensorflow-keynote4.jpg

Rajat then presented the new TensorFlow APIs.

170218-tf-model.jpg

TensorFlow 1.0 supports IBM's PowerAI distribution, the Movidius Myriad 2 accelerator and Qualcomm's Snapdragon Hexagon DSP. Rajat also announced the availability of XLA, an experimental TensorFlow compiler specialized in just-in-time compilation of algebraic computations.

170218-tf-xla.jpg

Megan Kacholia

megankacholia2.jpg

Megan Kacholia looked more closely at TensorFlow 1.0 performance.

170216-tensorflow-keynote7.jpg

In production, many architectures can be used: server farms, CPUs/GPUs/TPUs, low-latency servers (as in mobile), because TensorFlow 1.0 is optimized to deliver excellent performance in all of these environments.

170218-tf-performance1.jpg

Megan then showed examples of TensorFlow used in cutting-edge research and in practical mobile applications.

170218-tf-research2.jpg

170218-tf-research1.jpg

Keynote conclusion

To close the keynote, Jeff returned to thank everyone who contributes to the TensorFlow community from outside Google.
"From the community," Jeff said, "come suggestions, requests and even brilliant solutions that we at Google had not yet thought of," citing the case of a Japanese farmer who built a TensorFlow application on a Raspberry Pi to spot crooked cucumbers and discard them during packaging.
In medicine, Jeff recalled, TensorFlow has been used for diagnosing diabetic retinopathy (a summary here) and, at Stanford University, in the treatment of skin cancer.

Contacts

•    "Machine Learning and Data Science" Meetup, Rome: website and Facebook page


Report on the first "Machine Learning and Data Science" Meetup (February 3, 2017)

170203-ml-meetup.jpg

On February 2, 2017 the first meeting of the "Machine Learning and Data Science" Meetup (fb) was held in Rome, in the Sala Storica at LUISS ENLABS.

Agenda

  • Simone Scardapane and Gabriele Nocco: introducing the "Machine Learning and Data Science" Meetup
  • Gianluca Mauro (AI-Academy): Artificial intelligence for your business
  • Lorenzo Ridi (Noovle): Serverless Data Architecture at scale on Google Cloud Platform

Simone Scardapane and Gabriele Nocco: introducing the "Machine Learning and Data Science" Meetup

161022-simone-scardapane.jpg

(Simone Scardapane)

Simone (mup, web, fb, lk) briefly outlined the goals of the "Machine Learning and Data Science" Meetup.
"We want to build a community of ML, AI and Data Science enthusiasts and professionals," Simone said, "a place to find answers to questions such as:

  1. I'm passionate about ML: where do I find other experts?
  2. We're looking for an ML expert: do you know one?
  3. I'd like to get into ML: how do I start?"

gabrielenocco.jpeg

(Gabriele Nocco)

Gabriele Nocco (mup, fb, lk) announced that the Meetup's second event will be held in Rome on February 16, 2017 at the Talent Garden in Cinecittà (map), in collaboration with the Google Dev Group of Rome. Attendance is free, but registration on EventBrite is required.
"We will talk about TensorFlow and screen the keynote and some highlights of the first TensorFlow Dev Summit for all deep learning enthusiasts and, thanks also to Google's kind sponsorship, we will have our second shared networking session in the beautiful spaces available to us," Gabriele said.
innocenzo-sansone.jpg

(Innocenzo Sansone)

Innocenzo Sansone (fb, tw, lk), one of the organizers and sponsors, also spoke, recalling that Codemotion will take place in Rome on March 24-25, 2017, featuring, among others, a dedicated Machine Learning track.

Gianluca Mauro (AI-Academy): Artificial intelligence for your business

170203-gianluca-mauro2.jpg

(Gianluca Mauro)

Gianluca (blog, lk), an engineer, entrepreneur and expert in AI, ML and Data Science, is also one of the three founders, together with Simone Totaro (lk) and Nicolò Valigi (lk), of AI-Academy, a startup whose aim is to foster the use of Artificial Intelligence in business processes (see the AI Academy manifesto).

ai-academy.jpg

A brief history of Artificial Intelligence

In the first part of his talk, Gianluca traced the historical development of Artificial Intelligence.
AI's beginnings date back to the conference held at Dartmouth (USA) in 1956, organized by John McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon: "artificial intelligence" was discussed there for the first time, with the stated goal of "building a machine that fully simulates human intelligence", and research topics were proposed that would flourish in the following years: neural networks, computability theory, creativity and natural language processing.

dartmouth.jpg
 
Image source: http://www.scaruffi.com/mind/ai.html

AI research was generously funded by the US government until the mid-1960s: faced with a lack of concrete results, funding stopped, giving rise to the first "AI winter" (1966-1980).
In the 1980s AI regained momentum thanks to a paradigm shift: instead of pursuing the goal of artificially reproducing the whole of human intelligence, researchers fell back on building "expert systems" able to simulate knowledge in narrow domains.
This second attempt also met with little success, causing a new "AI winter" that lasted until the early 1990s, when a new discipline began to take hold: Machine Learning.

What is Machine Learning?

AI.jpg

Image source: http://www.thebluediamondgallery.com/tablet/a/artificial-intelligence.html

Machine Learning, Gianluca explained, is a branch of Artificial Intelligence that aims to build algorithms that adapt automatically to the data they receive as input, so as to produce "intelligent" outputs such as predictions and recommendations.
Gianluca gave the example of a child learning to walk: there is no need to know the law of gravity; it is enough to watch how one's mother walks and keep trying until balance is found.
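The idea of an algorithm that adapts to its input data, rather than following a hand-coded rule, can be shown in a few lines (an illustrative sketch with made-up data, not from the talk): fitting y = w·x by gradient descent, where w is learned purely from observations.

```python
# Minimal "learning from data" example: fit y = w*x by gradient descent.
data = [(1, 2.0), (2, 4.1), (3, 5.9), (4, 8.0)]  # observations, roughly y = 2x

w = 0.0    # the parameter to be learned; no rule for it is hand-coded
lr = 0.01  # learning rate

for _ in range(1000):
    # Gradient of the mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # adapt the parameter to the data

print(round(w, 2))  # ≈ 2.0, recovered from the data alone
```

The same adapt-to-the-data loop, scaled up to millions of parameters, is what training a neural network amounts to.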

What is Deep Learning?

Deep learning is a subset of Machine Learning concerned with the design, development, testing and, above all, training of neural networks and other technologies for automatic learning.
Deep learning underlies AI's spectacular successes in board games: IBM Deep Blue's chess victory over the reigning world champion, Garry Kasparov, and Google AlphaGo's victory over the world Go champion.

This is the golden age of Artificial Intelligence

According to Gianluca Mauro, this is AI's magic moment because we finally have the tools (algorithms, data, computing power) needed to build ML applications at ever-lower cost.
The algorithms are by now well proven, thanks to work published in recent years, starting with Corinna Cortes's "Support-vector networks" and David Rumelhart's "Learning representations by back-propagating errors".
The computing power comes above all from the large amount of open-source technology now available.
The combination of all these factors is revolutionary, as Chris Dixon, one of the best-known figures in US venture capital, puts it:
"Most of the research, tools and software related to ML are open source. This has had a democratizing effect, allowing small companies and even individuals to build truly powerful applications. WhatsApp was able to build a global messaging system serving 900 million users with just 50 engineers, compared with the thousands of engineers that were needed for earlier messaging systems. This 'WhatsApp effect' is now happening in Artificial Intelligence. Software tools like Theano and TensorFlow, combined with cloud data centers for training and cheap GPUs for deployment, now allow small teams of engineers to build innovative, competitive artificial intelligence systems."
According to Gianluca, AI will soon be a necessity for any company, or, to quote Pedro Domingos: "A company without Machine Learning can't keep up with one that uses it."
According to Andrew Ng, chief scientist at Baidu, AI and ML are already transforming companies, forcing them to overhaul their production processes, just as happened in the 19th century when cheap electricity first became available (video).
This cultural shift is already visible in Venture Capital and in Mergers & Acquisitions: large companies are looking not only for startups doing pure ML research, but for startups building services and products with embedded ML.
"We are at the dawn of a new era," Gianluca concluded, "that of Machine Learning as a feature."

Lorenzo Ridi (Noovle): Serverless Data Architecture at scale on Google Cloud Platform

lorenzo-ridi.jpeg

(Lorenzo Ridi)

Lorenzo Ridi (mup, fb, lk, tw) presented a concrete use case (available here in its full version, also on SlideShare) to show the benefits of an architecture on Google Cloud Platform, built from serverless components only, for applications with embedded Machine Learning.
The use case concerns a company that, with Black Friday approaching, commissions a social media survey, on Twitter in particular, to capture insights useful for positioning its products correctly before and during the event: this matters all the more given the huge size of the company's catalog, because misdirecting the advertising and promotional campaign would be a fatal mistake.
To handle the heavy traffic expected during the event, ACME's engineers decide to abandon traditional technologies and implement this architecture on Google Cloud Platform, using serverless components only:

170203-google-architecture-ml1.jpg
 
Ingestion

To collect the data, a simple Python application is implemented which, through the TweePy library, accesses Twitter's Streaming API and retrieves the stream of messages about Black Friday and related topics.
To make sure this component, too, meets the required reliability standards, it is run inside a Docker container on Google Container Engine, the Kubernetes implementation on Google Cloud Platform. This way there is no need to worry about outages or failures: everything is managed (and, when needed, automatically restarted) by Kubernetes.

170203-google-architecture-ml2.jpg
 
First of all, we build the Docker image to be used for the deployment. It is enough to write a suitable Dockerfile containing the instructions to install the required libraries, copy the application and launch the script:

170203-google-architecture-ml3.jpg
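A Dockerfile of the kind described might look like the following (a hypothetical sketch: the file names, base image and dependency list are assumptions, not taken from the talk):

```dockerfile
# Hypothetical Dockerfile for the ingestion app: install the libraries,
# copy the application, launch the script.
FROM python:2.7
RUN pip install tweepy google-cloud-pubsub
COPY ingestion.py /app/ingestion.py
CMD ["python", "/app/ingestion.py"]
```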
 
Et voilà! The Docker image is ready to run anywhere: on a laptop, on an on-prem server or, as in our case, inside a Kubernetes cluster. Deploying to Container Engine is very easy with the Google Cloud Platform command-line tool: just three commands to create the Kubernetes cluster, acquire the access credentials and run the application, scalably and reliably, inside a ReplicationController.
The second element of the chain, the component to which our application sends the tweets, is Google Pub/Sub, a fully managed middleware solution implementing a reliable, scalable Publisher/Subscriber architecture.
In the processing phase, two more tools from the Google Cloud Platform suite are used:

  • Google Cloud Dataflow is an open-source Java SDK (now known as Apache Beam) for building parallel processing pipelines. Cloud Dataflow is also the fully managed service, running on Google's infrastructure, that executes pipelines written with Apache Beam in an optimized way.
  • Google BigQuery is a fully managed Analytic Data Warehouse solution. Its astonishing performance, which we have had occasion to highlight several times, makes it an excellent choice within Data Analytics architectures.

The pipeline we design is extremely simple. All it does is transform the JSON structure describing each tweet, sent by the Twitter API and delivered by Pub/Sub, into a BigQuery record. Then, through the BigQuery Streaming API, each record is written to a table so the data can be analyzed immediately.
 170203-google-architecture-ml4.jpg
The pipeline code is extremely simple too; this is in fact one of the strengths of Apache Beam over other processing paradigms such as MapReduce. All we have to do is create a Pipeline object and then repeatedly call the apply() method to transform the data as needed. Note how the data is read and written using two I/O components included in the SDK, PubSubIO and BigQueryIO: no boilerplate code is needed to integrate the systems.
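The Pipeline/apply() pattern can be sketched as follows (a pure-Python stand-in, not the real Beam API, with the PubSubIO/BigQueryIO reads and writes omitted): each transform is applied in sequence, turning the raw tweet JSON into a flat BigQuery-style row.

```python
import json

# Toy stand-in for the Dataflow/Beam pattern described above.
class Pipeline:
    def __init__(self, records):
        self.records = records

    def apply(self, transform):
        # Apply a transform to every record; return self to allow chaining.
        self.records = [transform(r) for r in self.records]
        return self

def parse_json(raw):
    return json.loads(raw)

def to_bigquery_row(tweet):
    # Flatten the nested tweet JSON into a flat, BigQuery-style record.
    return {"text": tweet["text"], "user": tweet["user"]["screen_name"]}

raw_tweets = ['{"text": "Black Friday!", "user": {"screen_name": "acme"}}']
rows = Pipeline(raw_tweets).apply(parse_json).apply(to_bigquery_row).records
print(rows[0])  # {'text': 'Black Friday!', 'user': 'acme'}
```

In real Beam each transform also runs in parallel across workers, but the chained apply() style is the same.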

Machine learning

To visualize the results we use Google Data Studio, a tool in the Google Analytics suite for building graphical visualizations of various kinds from different data sources, which of course include BigQuery.
Dashboards can then be shared, or made publicly accessible, exactly as with a Google Drive document.

170203-ml5.jpg
 
This chart shows the number of tweets collected from each state of the Union. Striking, certainly, but not very useful for our purpose. Indeed, after some exploratory data analysis, we realize that the collected tweets alone do not support very "advanced" analyses. We therefore need to revise the processing procedure to try to infer some more "interesting" knowledge.
Google Cloud Platform comes to the rescue, in this case with a set of APIs, based on Machine Learning algorithms, whose purpose is precisely to add a pinch of "intelligence" to our analysis process. In particular we will use the Natural Language API to retrieve the sentiment of each tweet, i.e. a numeric indicator of how positive (or negative) the text of the message is.

170203-google-architecture-ml6.jpg
 
The API is very simple to use: it takes a text (our tweet) as input and returns two parameters:

  • Polarity (a FLOAT from -1 to 1) expresses the mood of the text: positive values denote positive feelings.
  • Magnitude (a FLOAT from 0 to +inf) expresses the strength of the feeling: higher values denote stronger feelings (be they anger or joy).

Our own simplistic definition of "sentiment" is nothing more than the product of these two values. This lets us assign a numeric value to every tweet, and hopefully extract some interesting statistics!
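The score defined above is trivially computed (this mirrors the article's own polarity × magnitude definition; the Natural Language API call itself is omitted, and the sample values are made up):

```python
# The article's simple sentiment score: polarity times magnitude.
def sentiment_score(polarity, magnitude):
    """polarity in [-1, 1], magnitude in [0, +inf)."""
    return polarity * magnitude

print(sentiment_score(-0.8, 2.5))  # -2.0: a strongly negative tweet
print(sentiment_score(0.1, 0.5))   # mildly positive, weakly felt
```

Multiplying the two keeps the sign of the mood while weighting it by intensity, so a weakly negative rant and a furious one get different scores.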
The Dataflow pipeline is modified to include this new step alongside the previous flow. The change is very simple and, given Cloud Dataflow's programming model, allows substantial reuse of the existing code.

170203-google-architecture-ml7.jpg
 

With this new data we can build much more interesting analyses of the geographic and temporal distribution of Black Friday "sentiment".
The following map, for example, shows the average sentiment recorded in each US state; darker colors represent more negative sentiment (that red square in the middle is Wyoming).

170203-google-architecture-ml8.jpg
 
This other analysis shows the sentiment trend for the three largest US vendors: Amazon, Walmart and Best Buy. Starting from this simple analysis, with a bit of drill-down into the data, we gleaned some interesting facts:

  • Twitter users did not appreciate Walmart's decision to bring its sales forward to the day before Black Friday, the national holiday of Thanksgiving. Walmart's popularity was in fact undermined by this decision from early November onwards; after all, workers' rights are a universal concern.
  • Amazon's promotional sales (opened on November 18, so ahead of Black Friday) were initially harshly criticized by users, with a drop in popularity that bottomed out on the 22nd. Later, though, the online retail giant regained ground on Best Buy, which instead seems to have kept its good reputation intact throughout the period.

170203-google-architecture-ml9.jpg


“From International Relations to Global Politics” - Interview with prof. Raffaele Marchetti (February 18th, 2015)

150218-iversity-ir.jpg
Agatino Grillo: Welcome, Professor Marchetti. Could you introduce yourself and the MOOC course "From International Relations to Global Politics" of LUISS University?

Raffaele Marchetti: I am an assistant professor (or associate professor using the Italian academic qualifications) in International Relations at the Department of Political Science at the School of Government of LUISS, Libera Università Internazionale degli Studi Sociali Guido Carli (Guido Carli Free International University for Social Studies) one of the most important Italian universities.
I got my degree in Rome University “La Sapienza” and a Phd at the London School of Economics (LSE). For other information you may follow me on twitter at https://twitter.com/rafmarchetti or visit my personal website at http://docenti.luiss.it/rmarchetti/
“From International Relations to Global Politics” is the first MOOC - Massive Open Online Course - of LUISS: it started on November 16th, 2015 and will finish after 10 weeks.
The course is held in English so the only requirement to participate is a laptop/desktop computer with a stable internet connection and a very good understanding of English.
I am very excited about it. Not only is it one of the first MOOCs on IR, but we already have thousands of enrollments. The fact that there are so many people interested in understanding better and discussing international affairs is simply great. I think this has a huge potential for creating a cross-cultural learning community.

Agatino Grillo:  Could you better explain what is a MOOC?

Raffaele Marchetti: As stated on Wikipedia, a MOOC is an online course aimed at unlimited participation and open access via the web. In addition to traditional course materials such as filmed lectures, readings and problem sets, many MOOCs provide interactive user forums to support community interactions between students, professors and teaching assistants (TAs). MOOCs are a recent development in distance education, first introduced in 2008, and are nowadays emerging as a very popular mode of learning.

Agatino Grillo: What is the format of LUISS “From International Relations to Global Politics” course?

Raffaele Marchetti: Starting from February 16, 2015 every week I will upload a number of videos with the related quizzes which you need to do in order to move on.
Furthermore you will find additional materials and references for your consultation.
Also, from time to time you will have the possibility to discuss in public some contentious issues and to write short essays that will be marked in a peer-to-peer mode. These are optional, however it is an opportunity to learn and engage with the others.

Agatino Grillo:  What is the syllabus?

Raffaele Marchetti: The course is divided into ten units, each with its own sub-units.
The ten units or chapters are:
Chapter 1  - How to study International Relations
Chapter 2 - Realism
Chapter 3 - Liberalism
Chapter 4 - Marxism and Constructivism
Chapter 5 - Foreign Policy Analysis
Chapter 6 - International Political Economy
Chapter 7 - Security studies
Chapter 8 - Globalization and the context of global politics
Chapter 9 - Global Politics
Finally there will be an optional exam scheduled from April 20th to May 4th 2015

Agatino Grillo: Are there prerequisites?

Raffaele Marchetti:  Just two suggested pre-readings:

1.    Morgenthau, H.J., Politics among Nations: the Struggle for Power and Peace. New York: Knopf, 1985 (available here in pdf format)
2.    Khanna, P., How to Run The World: Charting a Course to the Next Renaissance. New York: Random House, 2010 (available here in pdf format)

Agatino Grillo: Is the course free, or does it have to be paid for?

Raffaele Marchetti: Our Mooc course proposes three different tracks:

  1. Audit Track: it is completely free and includes all course material, possibility to participate at course community, statement of participation
  2. Certificate Track: it costs 49 euros and adds, on top of the previous track's features, a graded online exam, a certificate of accomplishment and a certificate supplement. A “Certificate of Accomplishment” is an identity-verified certificate with your final grade, signed by the instructor. If you achieve a top 10% grade, this will be noted on your certificate. All our certificates are identity-verified. A “Certificate Supplement” contains detailed information about the course content and your learning achievements, backing up your hard work with the details employers may ask for.
  3. ECTS Track: it costs 149  Euros. As with the Audit Track, you get access to all the course materials, lecture videos, assignments, and more. However, unlike the audit-only option, the ECTS-Track also gives you access to an on-site exam at the end of the course. This exam will be professionally graded by the course instructor. This gives you a chance to showcase the skills you’ve acquired. After successfully completing the on-site exam, you will receive an identity-verified Certificate of Accomplishment with your final grade and 3 ECTS-Points, signed by the instructor. If you achieve a top 10% grade, this will be noted on your certificate.

ECTS, the European Credit Transfer and Accumulation System, is a standard for comparing the study attainment and performance of higher-education students across the European Union and other collaborating European countries.

Agatino Grillo: Why study International Relations?

Raffaele Marchetti: Globalization has a considerable influence on contemporary politics. In my opinion, today it is easier to understand national and local political positions if we consider them with respect to issues of international politics and globalization: market integration, delegation of sovereignty, global challenges.

Agatino Grillo: Does it make sense to define oneself as right-wing or left-wing?

Raffaele Marchetti: Within both camps we must distinguish those who are open to globalization from those who oppose it. This helps us understand, for example, why today in many countries there are centrist governments formed by large coalitions, facing oppositions of anti-globalists and localists.
Another example comes from the recent European elections in May 2014, where the Eurosceptic parties from the left and right have joined forces against the pro-Europeans.

Agatino Grillo: Thank you very much for your time professor Marchetti.

Raffaele Marchetti: Thank you

140223-MOOC_poster_mathplourde


LUISS University launches its first MOOC, Massive Open Online Courses, on International Relations (February, 16th 2015)

150217-raffaele-marchetti.jpg

About LUISS

LUISS – Libera Università Internazionale degli Studi Sociali Guido Carli, or the Guido Carli Free International University for Social Studies – is an independent Italian university.
LUISS offers an innovative educational approach at its four Departments:

  1. Economics and Finance,
  2. Business and Management,
  3. Law,
  4. Political Science.

Its goal is not simply to convey knowledge but to “instill flexibility” in young people, giving them a sense of mastery over their future.

First MOOC at LUISS

On February 16th, 2015 LUISS launched its first online course, in International Relations: a series of twelve weekly online lectures entitled From International Relations to Global Politics, held by Raffaele Marchetti, professor of International Relations at the Department of Political Science.
MOOCs, or Massive Open Online Courses, are a format for free distance learning that is widely used by major universities in order to reach a large number of users around the world.
“It is an innovation of recent years,” says Marchetti, “a revolution that is slowly but decisively taking hold, reshaping the concept of teaching at university.”
LUISS is one of the first Italian universities to move in this direction and the course in International Relations is one of the first online courses on that topic offered in the world.
In their passage from university classrooms to e-learning platforms, the course’s content and methods had to be radically rethought.
“The production phase required us to rewrite the course in the form of a script in order to be as concise and understandable as possible, by breaking down the lectures into a number of small units between five and ten minutes long.”
The recording of lectures was done at the Rai (Italian public television) studios in Milan so as to recreate all of the video environments and settings using computer graphics.
“It was essential to use not only words but images, videos and a whole range of media to encourage interaction with my virtual students and make the class more dialogue-based.”
Aside from attending virtual classes, students will study additional materials and carry out a series of peer-to-peer tutorials and reviews (“assessments that are less 'official' than those of the professor, but which have the value of including many points of view”), to prepare for a final exam which, if passed, will lead to a certificate and 3 ECTS university credits.
The online course From International Relations to Global Politics already counts several thousand enrolled students from around the world: “From Germany to India, from Spain to the United States, with a fairly high percentage of students even in cities like Kabul.”
The course will remain active for about three months, to then be relaunched in the fall, along with a new MOOC in Management with Professor Andrea Prencipe, which is already being prepared.

Video

(1:13 minutes)

 

Job in the Time of Web – Interview with Massimo Chiriatti (November 24, 2014)

(This post is part of a series on GDG Rome DevFest 2014, Italian translation available here)

massimochiriatti.png

Agatino Grillo: Hello Massimo. Would you like to introduce yourself?

Massimo Chiriatti: Hello everyone. I am a technologist, an expert in the digital economy and a blogger for “Il Sole 24 Ore”, the most important Italian business newspaper. I have a degree in Political Science and a Masters in Information Systems Governance. My primary focus is on the areas of intersection between technology and the digital economy.
You can contact me via email, Twitter, Facebook, Linkedin.

locandina2-devfest-2.png

Agatino Grillo: Why did you participate in Google DevFest in Rome last November 8, 2014?

Massimo Chiriatti: The friends of the Rome Google Developer Group invited me to speak at the morning conference, where various speakers described the changes that new technologies and innovations are bringing to their particular fields.

Agatino Grillo: Your speech was entitled “Work games” (here available in pdf), a tribute to the famous eighties movie “War Games” (here on Wikipedia) and at the same time a warning that competition between man and machine is increasing, also in the workplace.

141124-work-games.jpg

Massimo Chiriatti: In my presentation I started by noting that more and more machines and advanced software are able to take over work activities previously managed only by humans. This began with the automation of repetitive jobs (typically agriculture and assembly lines in the manufacturing sector) and then moved on to the computerization of work with a high information content: airplane ticket sales and check-ins, virtualized stores such as Amazon, online banking and insurance services. The immediate future is represented by machines with predictive capabilities based on the storage of so-called big data, which thanks to “abductive” software logic will be capable, for example, of making diagnoses based on symptoms. My provocative conclusion is that it is no longer worthwhile to compete with machines: rather, we have to work with them to increase and multiply output and share it more equally among humanity.

Agatino Grillo: What does that mean?

Massimo Chiriatti: Nowadays technological innovations are increasingly sophisticated and less expensive, so companies obviously try to replace human labour wherever possible with the work of machines; the only areas where this is not yet possible are those tied to creative work and those where empathy with other human beings still plays a vital role. So, to preserve a job or to find one, it is important to be innovative, to be able to communicate one’s own capabilities, and to be able to work in teams and in online communities. In other words: the middle class is destined to disappear under this new social, economic and cultural system, which is in continuous change and does not require unskilled workers but specialized talents. The Network offers us the possibility of being more visible and of being engaged quickly by companies in search of our abilities. But be careful: the web allows everyone to show off their abilities and thus exacerbates the competition.

Agatino Grillo: So?

Massimo Chiriatti: Before, anyone looking for work had to scan the ads in newspapers or on the Web; now it is an algorithm that searches for the keywords. It is not the same thing. For years we have been putting our CVs online, so that we are now in a data matrix. When a job is matched with an application, we will be found, and this will be announced by a message on our smartphone.

Agatino Grillo: What about the risk of excessive professional individualism? Is Italy ready for these challenges? Is Italy being increasingly marginalized by the big corporations?

Massimo Chiriatti: It must be clear that the world changes despite our fears. The facts are these: large enterprises are growing in number, they are decentralizing, and they have a shorter life. From 1988 to 2007 their number doubled: we went from 25 million to 50 million companies worldwide. These too are benefits of globalization, and they are often overlooked because the new jobs are often not created in our own territory. The key drivers of this trend are the falling costs of transporting goods and of handling information, both its processing and its storage. In other words: money has always moved to where it yields the maximum profit, and today it moves to where the best minds are, not to where labor costs less or within a defined area. We still mistakenly think of the big companies as very old, but it is not so: the biggest enterprises were born about ten years ago; at this rate, try to imagine what (and especially who) will be starting up companies in the next decade. Now capital, and technology, find opportunities in start-ups and small companies, even ones created by a single talented person.

Agatino Grillo: Thanks Massimo

Massimo Chiriatti: Thank you all

Slides of Massimo at Rome Google DevFest 2014

  • Massimo Chiriatti, “Work Games”, Google DevFest 2014, Roma, November 8, 2014 (pdf, 14 slides, 790 K)

Insights

How to contact Massimo Chiriatti

Connected posts

Let's build an immersive storytelling - Interview with Serena Zonca (November 12th, 2014)

(This post is part of a series on GDG Rome DevFest 2014, Italian translation available here)
serena.jpeg

Agatino Grillo: Hi, Serena. Do you want to introduce yourself quickly?

Serena Zonca: Hello everyone. My name is Serena Zonca. I was born in Bergamo, Italy, in 1967. I have a degree in Foreign Languages and Literature and I am a journalist. I work in paper and digital publishing. I am the founder of www.autopubblicarsi.it, a website dedicated to new forms of publishing, and in particular to self-publishing. I like to experiment and I think ebooks are the starting point of the new publishing universe. More information about me can be found on my blog. Obviously I can be contacted via Facebook, Linkedin and Twitter or directly through my website.

twitter-gdg-4.png

Agatino Grillo: What were you doing at Google DevFest Rome last November 8, 2014 among programmers, engineers and various nerds?

Serena Zonca: The Rome Google Developer Group asked me to contribute to the morning’s general conference on how new technologies are transforming the way people work, so I gave a speech on virtual reality and “virtual storytelling”.

Agatino Grillo: Let’s begin with virtual reality. For many years VR has promised to revolutionize our lives and our way of interacting with digital universes, but hasn’t it actually been a big disappointment? Are there significant results yet or not?

Serena Zonca: VR has come a long way since its debut. As I said in my speech at the Google conference, I experienced virtual reality for the first time in 1992, at an event titled “Virtual Workshops” organized by the Triennale in Milan. I had to wear a helmet and a huge belt, connected by a bundle of cables to a computer that at the time seemed monstrous and today, to tell the truth, would evoke tenderness. I was embarrassed, not just at the idea of wrapping myself up like that, but also because a small crowd laughed as it watched the masked volunteers. But, once connected, I found myself in a new virtual universe: a bare white room with some little airplanes suspended in the air, a handful of polygons. The pixels were as big as pancakes and the frame rate was 4-5 fps. But ... I was there. The room existed, it was real! Since then I have always been interested in virtual reality.

Agatino Grillo: Can you explain what “virtual storytelling” is?

Serena Zonca: Telling stories is part of human nature, and storytelling is therefore a proven and powerful mechanism to capture attention, communicate and generate emotions, as advertising knows well! Despite being an anthropological constant, narrative has evolved over time. It passed from the oral form of primordial stories to written form, taking the shapes of theater, the novel, the film and television screenplay, the script of a video game. It has constantly adapted to the technical means available at each moment.
The challenge of “virtual storytelling” is to merge narrative with current technology, including virtual reality, to create a new narrative medium that you no longer receive passively, as with today’s books or movies, but interactively: stories will no longer be bounded by a beginning and an end but will offer multiple branches and endings. To realize this “virtual storytelling”, writers will work with software developers, digital graphic artists, 3D modelers ...

Agatino Grillo: What is the connection between “virtual storytelling” and virtual reality?

Serena Zonca: The connection is immersion, which is another constant of cultural history. Human beings have always tried to recreate the illusion of other worlds and to be part of them, surrounded at 360°. The earliest examples date back to classical antiquity: I am referring to the frescoes of the Villa of Livia at Prima Porta in Rome, the trompe l'oeil of the rooms of the Villa Farnesina in Rome, and so many other examples up to the Cinerama and the Sensorama of the 1960s.
Thus we come to the 1992 I mentioned earlier. Virtual reality promised an immersive revolution that has not yet been realized. The computers were too slow; the displays, the position trackers, the gloves too expensive and cumbersome. The princely progress of VR suffered a setback before the apocalyptic predictions of its detractors had time to come true.
Today, however, the context looks favorable again. On one hand we are witnessing the advancement of technology; on the other, user interfaces are friendlier and people deal easily with computers and the network.
Today a VR platform is available for roughly one thousand euros: Oculus Rift, Microsoft Kinect and Virtuix Omni. It allows everyone to enter an artificial world, moving and interacting naturally with its components.

141112-sensorama.jpg

Agatino Grillo: How do you think the publishing industry will evolve in the coming years? Many fear that new technologies could wipe out the world we know now, and those fears are justified. In 10 years will there still be books? Or will they die out as vinyl records did?

Serena Zonca: The world does not stop for our fears. The real challenge is to manage this change, not to oppose it. I’m sure paper books will continue to exist for decades. What we need to work on is building new languages and new tools that allow us to exploit the opportunities technology offers us today and to meet the needs of a new audience. For this reason I was very pleased to attend the Google meeting and to exchange views with the Google engineers who design digital tools essential to the cultural development of mankind.

google-devfest-2.jpg

Agatino Grillo: Is there any specific project you’re working on in this area?

Serena Zonca: Right at the Google Rome DevFest we launched the proposal to create a true immersive-story ebook, thanks to the power and wealth of the community that this type of event gathers. Another goal would be to develop new tools that bring authors closer to immersivity and give them greater autonomy in experimenting with new languages. Immersive storytelling requires, I think, sharing cultural and technological experiences.

Agatino Grillo: Thanks Serena

Serena Zonca: Thank you all.

Slides of the presentation

  • Serena Zonca, “Virtual Storytelling”, Google DevFest 2014, Roma, November 8, 2014

How to contact Serena Zonca

Connected posts

Google DevFest Rome 2014 (University Roma TRE, November 8, 2014)

(This post is part of a series on GDG Rome DevFest 2014, Italian translation available here)

locandina2-devfest-2.png

  • photos & images taken from Google+ and Twitter

On November 8, 2014 I participated in the Developer Fest hosted by the Department of Engineering at the University Roma TRE and organized by the Rome Google Developer Group (GDG).
I wanted to get an idea of what was cooking at Big G, the “best company in the world” according to Fortune in 2014 and a “Great Place to Work” according to many worldwide job rankings.
I only attended the morning conference, which was more “philosophical” than technological, and not the “code-labs” scheduled in the afternoon, which offered practical sessions focused on the various development environments and solutions provided by Google.
Entering the conference room I expected a formal (read: boring!) setting, as is usual at these slide-centered meetings, but – surprise! – it was instead a sort of cheerful, improvised mega-assembly, not lacking in funny incidents (Murphy’s Law!): a microphone that did not work, a fire alarm siren that sounded at inopportune moments, and so on.
Here are my impressions and notes!

locandina-devFestSmall-2.jpg

The agenda

The program, minimal and multicolored like the Google homepage, simply read:

     9:30 Welcome - Paolo Merialdo and Antonella Blasetti
     10:00 Serena Zonca - Virtual Storytelling
     10:45 Massimo Chiriatti - Work Games
     11:30 Darius Arya - Ancient (and Modern) Rome live
     12:15 Sergio Paolini - The Web and neurosurgery
     13:00 The Lab and the Group's activities
     14:30 The Lab
     17:00 Treasure Hunt

Paolo Merialdo and Antonella Blasetti: get the party started!

paolo-merialdo.png

The conference began with Paolo, professor at the University Roma TRE and start-upper, who welcomed everyone present, and Antonella (the “boss” of the event), who clarified the spirit of the conference: the Google DevFest is not a formal lecture-meeting but an interactive party where people meet to exchange experiences, strengthen motivation and plan joint activities.

antonella-blasetti.jpg
“We are not here just to talk about new technologies” stressed Antonella “but primarily to share information, to understand how the world is changing around us and to imagine together how we can facilitate this change thanks to the tools provided by Google”.
“To accomplish this” continued Antonella “we begin with three different speakers linked by two elements in common:

  • the passion and excellence in their own professional activity,
  • the need to adopt innovative technologies to do better quality-work”.

Serena Zonca: Virtual Storytelling

serena-zonca.jpeg

The first speaker was Serena, editor, journalist and founder of www.autopubblicarsi.it.
She gave an elegant and passionate presentation on virtual reality (VR) and in conclusion proposed to engage all of us in a collective project for the realization of an “immersive” electronic book that would allow the reader to enjoy and co-participate in the narrative using, for example, a VR platform that is already available and cheap: Oculus, Kinect and data gloves.

Massimo Chiriatti: Work Games

massimochiriatti.png

Massimo is a technologist, blogger and contributor to “Il Sole 24 Ore”, the most important Italian business newspaper.
He gave an ironic and engaging presentation (here in pdf, 14 slides, 790 K) on the real “world of work”, illustrating the risks and opportunities arising from new technologies.
According to Massimo we need to abandon the idea of spending our whole working life with a single employer: the average life of the big US companies is decreasing while, at the same time, human life expectancy, including working life, is increasing.
The present (and the future) of work requires a nomadic attitude, that is, the capability of moving rapidly between industries, keeping in mind that the competition among those who aspire to be hired is based both on the specialization of one’s skills and on the ability to “communicate” and sell one’s expertise, especially through the new digital media.

Darius Arya: let’s develop immersive multimedia experiences about history

darius-arya.jpg

Darius is a (multi-platform) archaeologist, TV documentarian and social media influencer.
His talk focused on how to tell Rome’s history through social and digital media like YouTube, Instagram, Twitter, Facebook and so on.
He presented a couple of videos: “Ancient Rome Live” (Youtube) and “Save Rome: Preserving the Eternal City in the 21st Century” (YouTube), to explain the idea that we can better preserve our common history and heritage through new media outlets.
According to Darius, YouTube offers a great venue when combined with other learning platforms and applications.

Sergio Paolini: the Web and neurosurgery

sergio-paolini-2.jpg

Sergio, who is a neurosurgeon, gave a brilliant presentation on how the web has radically altered the field of surgery in the last 10 years, both in doctor-patient interaction and in the training and clinical practice of doctors.
Sergio also highlighted the limits and delays of Italy in this field especially as concerns the public administration.
According to Sergio new technologies can play a key role in this field: the entry of the Internet in the operating room is producing significant improvements in medical practice by sharing information between operators; further progress can be expected thanks to the so-called wearable computing like Google Glass.

Conclusions

antonella-speaking-2.jpg

The first part of GDG Rome DevFest 2014 ended with some quick conclusions by Antonella Blasetti:

  • we are here to establish a real community to which everyone can contribute;
  • this is a new world, so we have to think in a new way: technology solutions must be built “from below” and realized through joint actions;
  • the “users” of new technologies are often ahead of the technicians in terms of knowledge, as today’s speakers demonstrated;
  • different skills enrich the solutions.

In closing, Antonella proposed working on the projects launched during the talks: virtual storytelling, integrated communication, web-enriched cultural heritage. The idea is to merge all of these exciting proposals to realize, for example, a new rich-experience website, following the model that Google already produced for the city of Marseille.

Rome DevFest 2014 code-labs

tipi-google-2.jpg

Connected posts

AngularJS and the future of web programming: a talk with Vittorio Conte (February 4th, 2015)

angularjsintro2.jpg

(This post is part of a series on GDG Rome DevFest 2014)

Agatino Grillo: Hi Vittorio. Could you introduce yourself?

Vittorio Conte: I am a software developer at Engineering Ingegneria Informatica SpA, a multinational company and a leader in Italy in software and IT services. I live in Rome and I love traveling, my girlfriend, my family and dogs.

Agatino Grillo: What is AngularJS?

Vittorio Conte: AngularJS is a JavaScript framework designed to simplify the web developer’s experience. It provides a very structured approach and several features to develop websites and applications easily and quickly.
Created by Google developers, today it is an open source project powered by Google, with hundreds of open source contributors around the world.

Agatino Grillo: The web programming scene is more crowded every day: why did Google propose another framework?

Vittorio Conte: There are a lot of alternatives, but most of these frameworks use an imperative approach to DOM manipulation, which makes code organization and maintenance harder. AngularJS, instead, is based on the MVC pattern and gives you a mental model for “where to put what”. It provides a declarative approach to extending HTML functionality, improving code reusability and maintenance.
Developing a single-page application is then fast and easy.

Agatino Grillo: What is a single-page application?

Vittorio Conte: A single-page application (SPA) is a web application or website that fits on a single web page, with the goal of providing a more fluid user experience akin to a desktop application. In a SPA, either all the necessary code – HTML, JavaScript and CSS – is retrieved with a single page load, or the appropriate resources are dynamically loaded and added to the page as needed, usually in response to user actions. The page does not reload at any point in the process, nor does control transfer to another page, although modern web technologies (such as those included in HTML5) can provide the perception and navigability of separate logical pages within the application.
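
The mechanism behind that behavior can be sketched in a few lines of plain JavaScript. The helper below (`matchRoute` is a hypothetical name, not the API of AngularJS or any specific router) matches the current URL path against a pattern like `/users/:id` and extracts the parameters; a real SPA would run something like this on every `hashchange` or `popstate` event and swap the matched view into the page, so no full reload ever happens:

```javascript
// Minimal sketch of SPA-style route matching (illustrative only).
// A pattern like "/users/:id" is split into segments; ":name" segments
// capture URL parameters, literal segments must match exactly.
function matchRoute(pattern, path) {
  const pat = pattern.split('/').filter(Boolean);
  const seg = path.split('/').filter(Boolean);
  if (pat.length !== seg.length) return null; // different depth: no match
  const params = {};
  for (let i = 0; i < pat.length; i++) {
    if (pat[i].startsWith(':')) params[pat[i].slice(1)] = seg[i];
    else if (pat[i] !== seg[i]) return null;  // literal segment mismatch
  }
  return params; // e.g. { id: "42" } for "/users/42"
}
```

On a match, the application renders the corresponding view in place instead of asking the server for a new page.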

Agatino Grillo: What about data binding?

Vittorio Conte: In traditional web frameworks the controller combines data from models with templates to deliver a view to the user. This combination reflects the state of the model at the time the view is rendered.
AngularJS takes a different approach. Instead of merging data into a template and replacing a DOM element, AngularJS creates live templates as the view. Individual components of the view are dynamically interpolated live. This feature is one of the most important in AngularJS and allows us to write much less JavaScript code.
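
The “live template” idea rests on dirty checking: the framework re-evaluates watched expressions until nothing changes any more. Here is a toy version in plain JavaScript, assuming nothing about the real AngularJS internals beyond the watch/digest idea (the `$watch`/`$digest` names echo the AngularJS vocabulary, but this is an illustrative sketch, not the actual implementation):

```javascript
// Toy AngularJS-style dirty checking. A scope holds watchers; $digest keeps
// re-checking all watched expressions until a full pass finds no change.
class Scope {
  constructor() { this.$$watchers = []; }
  $watch(getter, listener) {
    this.$$watchers.push({ getter, listener, last: undefined });
  }
  $digest() {
    let dirty;
    do {
      dirty = false;
      for (const w of this.$$watchers) {
        const value = w.getter(this);
        if (value !== w.last) {
          w.listener(value, w.last); // update the "view"
          w.last = value;
          dirty = true; // a listener may have changed other watched values
        }
      }
    } while (dirty);
  }
}

const scope = new Scope();
let view = '';
scope.$watch(s => s.name, v => { view = 'Hello, ' + v; });
scope.name = 'world';
scope.$digest(); // view is now "Hello, world"
```

Change `scope.name` and run `$digest()` again and the view string follows: that is the live interpolation the interview describes, in miniature.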

locandina2-devfest-2.png

Agatino Grillo: At the Rome Google Developer Group (GDG) Fest on 8th November 2014, your code-lab was dedicated to getting started with AngularJS. What did you propose?

Vittorio Conte: GDG DevFest codelabs are hands-on lab sessions that allow attendees to learn Google technologies through practical examples, with experts sharing their knowledge and passion. You can find the slides and code of my AngularJS codelab here. I proposed a rapid overview of AngularJS features and showed how you can build a simple Angular app in minutes thanks to features like two-way binding, directives, dependency injection, etcetera. I would like to thank my friend Alessandro Relati, who gave the codelab with me.
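
Of the features Vittorio lists, dependency injection is perhaps the least intuitive. A toy registry in plain JavaScript can sketch the spirit of it (all names here are hypothetical, chosen for illustration; the real AngularJS injector works differently in detail):

```javascript
// Minimal dependency-injection sketch: services are registered with the list
// of names they depend on; get() resolves dependencies recursively and
// memoizes each built service, so every service is a singleton.
function createInjector() {
  const factories = {};
  const cache = {};
  return {
    register(name, deps, factory) { factories[name] = { deps, factory }; },
    get(name) {
      if (name in cache) return cache[name];
      const { deps, factory } = factories[name];
      const resolved = deps.map(d => this.get(d)); // resolve dependencies first
      return (cache[name] = factory(...resolved)); // build and memoize
    },
  };
}

const injector = createInjector();
injector.register('greeter', [], () => name => 'Hello ' + name);
injector.register('app', ['greeter'], greet => greet('Rome'));
```

Asking the injector for `app` pulls in `greeter` automatically; the consumer never constructs its own dependencies, which is what makes components easy to swap and test.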

Agatino Grillo: Has Google already announced the AngularJS 2.0 release?

Vittorio Conte: The next version of AngularJS is currently in a design and prototype phase. According to Google, AngularJS 2.0 will focus on mobile apps, but desktop architectures will be supported too.

Agatino Grillo: Thanks Vittorio

Vittorio Conte: Thanks to you

VittorioConte31.jpg

How to contact Vittorio Conte

Slides and code

Links

Connected posts

Pierluigi Marciano: state of Drupal 8 and Italian code sprint 2015 (January 26th, 2015)

(This post is part of a series on the Drupal 8 code sprint 2015)

pierluigi-marciano-4.jpg

The Italian Drupal 8 code sprint was hosted by Wellnet, a Drupal-oriented firm. A conversation with Pierluigi Marciano, CEO of Wellnet and drupalist.

Agatino Grillo: Hello Pierluigi. Can you introduce yourself and Wellnet?

Pierluigi Marciano: I am the founder and managing director of Wellnet, an Italian ICT solutions provider established in 2005, serving the Milan and Rome areas. I was born in Naples, where I earned a BA in Economics with a specialization in the management of service industries. I am passionate about the web, open source, process optimization and innovative human resources management. In my free time I love traveling, cooking, old Vespas and jazz music.

Agatino Grillo: What about Wellnet?

Pierluigi Marciano: Wellnet is an ICT consulting company that since 2005 has been providing companies and institutions with expertise in the development of web applications and digital strategy. We have been working with Drupal since 2006 and we use this platform as a Content Management Framework that helps us build customized features and layouts. We provide Drupal administration and development training to companies and institutions, and we like to organize, participate in and contribute to Open Source events and research. We also have a J2EE development department specialized in Business Intelligence and Process Engineering for the financial market. Wellnet is an Acquia partner and a Google partner too.

Agatino Grillo: Why did you choose Drupal as your Content Management Framework?

Pierluigi Marciano: Drupal is the leading open source content management system for developing sophisticated, flexible and robust websites, social media networks and applications. Drupal is a mature and reliable product with a huge user base, used by many organizations such as the UN, Google, Yahoo, AOL, Sony, MTV, Nike, Amnesty International, Greenpeace, FedEx and thousands of others. Drupal is designed for rapid deployment and enables you to work in a true “Web 2.0” style in which core features and functionality can be brought to market rapidly. Last but not least, Drupal is Open Source, with a large community estimated at more than 1 million members and 32,000 developers around the world.

wellnet-leader-consulenza-drupal

Agatino Grillo: On 17th January 2015 Wellnet hosted Italian Drupal code sprint. What is a code sprint?

Pierluigi Marciano: A code sprint is a meeting of Drupal developers and users aimed at helping to get Drupal 8 released by patching code and resolving public issues in the Drupal source. A sprint is also a key step in Drupal community building, pairing new and experienced contributors for mentorship. Although a sprint is not a learning event, it is perfect for tackling programming issues and improving Drupal know-how: during the sprint, people work together, have one-off discussions, break out into groups, and comment on issues or on IRC. By attending a sprint, the community can help advance Drupal through collaboration and contribution.

150127-drupal-infographics.jpg

(click to enlarge)

Agatino Grillo: What is the state of Drupal?

Pierluigi Marciano: First of all, Drupal is not just a Content Management System (CMS); it is a framework for web content management with an embedded development platform. So you can use Drupal not simply as a tool for creating web pages but also as a platform to support the digital strategy of your company or agency.

Agatino Grillo: In which sense?

Pierluigi Marciano: Using Drupal you can easily express your digital business by creating your web and mobile presence. You can focus on your business ideas and content and leave it to Drupal to realize them in a rapid, state-of-the-art and economical way. Drupal 8’s new features are enterprise-oriented: native web service support, much improved multilingual support, a configuration management system, streamlined content editing, in-place editing, responsive design, HTML5 support, a built-in WYSIWYG editor and more. The list of improvements is long, but above all the new Drupal version moves to a Symfony-based architecture, and that is the biggest news!

drupal8.jpg

Agatino Grillo: When will Drupal 8 be released?

Pierluigi Marciano: There is no official Drupal 8 release date yet. As you can see on the official release cycle page for Drupal 8, the first Drupal 8 beta was released on October 1, 2014. Today the current beta is the fifth, released on January 28, 2015.
The first Drupal 8 release candidate (RC) will be available once the number of critical bugs and tasks has been reduced to zero. Maintainers will also evaluate the major bugs and tasks before creating the first release candidate.

Agatino Grillo: Last question. How does the Italian Drupal community compare with the European or American ones?

Pierluigi Marciano: The Italian Drupal community is still one step behind other countries. I think this gap comes from the limited number of Italian Drupal-certified companies and the small number of Drupal events and meetups hosted in Italy, but I hope the Italian Drupal community will grow rapidly in the coming months. We do our part by organizing the DrupalDay every year.

Agatino Grillo: Thanks Pierluigi

Pierluigi Marciano: Thanks to you

pierluigi-marciano.jpg

How to contact Pierluigi Marciano

logo_wellnet.jpg

How to contact WellNet

Links

Rome GDGFest 2014: how to replicate the classic arcade “Space Invaders” in Unity3d, by Vincenzo Favara (12th January 2015)

(This post is part of a series on GDG Rome DevFest 2014)

150112-vincenzo-favara2.jpg

(Claudio d’Angelis and Giovanni Laquidara)

Agatino Grillo: Hi Vincenzo. Can you introduce yourself?

Vincenzo Favara: I am an analyst-developer and a true computer geek with an open mind. I compose poems for my own pleasure and that of my friends. I am very talkative, always learning about new technologies and new ways of thinking. I am a member of the Google Developers Group Lazio/Abruzzo (GDG Lab).

Agatino Grillo: Motto?

Vincenzo Favara: “The impossible is the first step towards the possible”

Agatino Grillo: You gave a talk at the Rome Google Developer Group (GDG) Fest on 8th November 2014, in a code-lab dedicated to Unity3d. What is it about?

Vincenzo Favara: Unity3d is a game development ecosystem: a powerful rendering engine fully integrated with a complete set of intuitive tools and rapid workflows to create interactive 3D and 2D content. It includes a rendering and physics engine, a scripting interface to program interactive content, a content exporter for many platforms (desktop, web, mobile) and a growing knowledge sharing community.

150112-unity3d.png

Agatino Grillo: Why a codelab about Unity3d at a Google Developer Group Fest? Unity3d doesn’t belong to the Google universe …

Vincenzo Favara: Google Developer Groups are a major initiative for Google, but each GDG is an independent group, so in our conferences we can mix Big G technologies with other topics. And of course you can use Unity3d to develop games for Android: Unity3d has devoted a lot of effort to supporting development on the Android platform.

Agatino Grillo: What topics did you talk about in your codelab?

Vincenzo Favara: The goal of my codelab was to teach how to quickly implement a 2D game in Unity3d by showing how to replicate the classic arcade “Space Invaders” – just a simple example for beginners. Slides are available here, code on GitHub.
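
The heart of any “Space Invaders” clone is the fleet movement rule, which is engine-independent: the whole fleet marches sideways, and when it touches a screen edge it reverses direction and steps down one row. Here is that logic sketched in plain JavaScript (names like `stepFleet` are illustrative; the codelab itself implements the rule inside Unity3d components):

```javascript
// One tick of the classic Space Invaders fleet march.
// fleet = { dir: +1 or -1, speed, drop, invaders: [{x, y}, ...] }
// minX/maxX are the horizontal screen bounds.
function stepFleet(fleet, minX, maxX) {
  const xs = fleet.invaders.map(i => i.x);
  const atEdge =
    (fleet.dir > 0 && Math.max(...xs) + fleet.speed > maxX) ||
    (fleet.dir < 0 && Math.min(...xs) - fleet.speed < minX);
  if (atEdge) {
    fleet.dir = -fleet.dir;                              // reverse direction
    fleet.invaders.forEach(i => { i.y -= fleet.drop; }); // step down one row
  } else {
    fleet.invaders.forEach(i => { i.x += fleet.dir * fleet.speed; });
  }
  return fleet;
}
```

In Unity3d the same rule would typically live in a script’s per-frame update, moving the invader transforms; the sketch just isolates the game logic from the engine.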

150112-Space_Invaders_flyer.jpg

Agatino Grillo: Advantages of using Unity3d in game development?

Vincenzo Favara: The main advantage is that Unity3d offers a rich, fully integrated development engine for the creation of interactive 2D and 3D content. The second point is that with Unity3d you can publish your game on several different platforms, programming in JavaScript, C# or Boo. Finally, Unity3d has a large asset store where you can buy scripts, tools and textures to use in your game.

Agatino Grillo: Computer games are rapidly evolving in sophistication and it is now possible to use their potential to develop inexpensive, immersive and realistic media experiences. What is your opinion on this matter?

Vincenzo Favara: Video games are a primary component of the digital interactive media industry and a form of digital art. I believe video games are an exciting opportunity and an instrument for creating innovative immersive and interactive media experiences. Recently Unity3d announced full free integration with Oculus, a virtual reality platform: you can use Unity 4.6 and the Oculus integration package to deploy any sort of virtual reality content imaginable to the Oculus Rift, a VR head-mounted display.

Agatino Grillo: Thanks Vincenzo

Vincenzo Favara: Thanks to you

How to contact Vincenzo Favara

Codelab: code and slides

Links

Connected posts
