
Getting Your News From Social

Recently, the Pew Research Center’s Journalism Project released a study concluding that news consumption on Facebook is common, but incidental, for users. Today, Pew released findings from the same study regarding Twitter users’ consumption of news. As part of our Authority Report, Parse.ly tracks the incoming referral traffic to over five billion pageviews for news websites. So we thought we’d take a look at the data from our side.

Below we selected a few findings from the report and took a look from the publisher side.

Pew Research Center Journalism Project

Pew Finding: Facebook news consumers still access other platforms for news to roughly the same degree as the population overall. Site Analysis:

Here are the quick and dirty facts:

  • Facebook is the top referral source for 10% of our news websites, meaning that these sites see the most external traffic from Facebook

  • Facebook is the number two referral site overall; number one is Google and Google properties (including search). Google has a commanding lead, however: 66% of our sites see Google sending the most external traffic

  • Other sites that top the list are Pinterest, Yahoo, and StumbleUpon, as well as fellow news sites, including the Huffington Post and Drudge Report.  

See the full list of referral sources to our network of news websites by downloading our Authority Report.

Pew Finding:  Facebook news consumers who “like” or follow news organizations or journalists show high levels of news engagement on the site. Site Analysis:  We looked to see if there was any correlation between the number of Facebook likes and the referral traffic overall. For this, we created a metric called “Facebook Likes Ratio” (FLR) which compared Facebook likes to a site’s article traffic. Sites with Facebook as the largest referral source had an average FLR of 42%. By comparison, a sample of similar sites (based on audience size and topics) that did not have Facebook as their main referral source had an average FLR of 7%.

Did we make up a metric to determine this? Yes, but could some publishers possibly use that metric to help grow audiences and traffic? Certainly.
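A metric like FLR is simple to compute from two numbers a publisher already has. Here is a minimal sketch with hypothetical like and pageview counts (the function name and figures are our own illustration, not values from the report):

```python
def facebook_likes_ratio(likes, article_pageviews):
    """Facebook Likes Ratio (FLR): likes as a percentage of article traffic."""
    if article_pageviews <= 0:
        raise ValueError("article_pageviews must be positive")
    return 100.0 * likes / article_pageviews

# Hypothetical sites: (Facebook likes, article pageviews)
sites = {
    "facebook-driven site": (42_000, 100_000),  # FLR = 42%
    "search-driven site": (7_000, 100_000),     # FLR = 7%
}
for name, (likes, views) in sites.items():
    print(f"{name}: FLR = {facebook_likes_ratio(likes, views):.0f}%")
```

Tracking a ratio like this over time, rather than raw like counts, makes sites of very different sizes comparable.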

Pew Finding:  Roughly half, 49%, of Facebook news consumers report regularly getting news on six or more different topics. The most popular topic is entertainment news, which 73% of Facebook news consumers get regularly on the site. Close behind is news about events in one’s own community (65%). National politics and government rank fourth, reaching 55% of these consumers regularly, just behind sports, which reaches 57% regularly. Site Analysis: None of the sites with Facebook as the largest referrer are in the entertainment vertical. However, half were in the general news and information sector, which includes local and national entertainment, community, government, politics, and sports. Perhaps within these sites, specific sections are getting more social referrals? Editors can use Parse.ly to answer exactly this sort of question, and create a social media strategy around the results.

Pew Finding: Nearly one-in-ten U.S. adults (8%) get news through Twitter. Site Analysis: Twitter is not the top source of referral traffic for any site in our network. This seems in keeping with the 8% finding, though the Pew study makes interesting points specific to Twitter users: “Twitter news consumers stand out for being younger and more educated than both the population overall and Facebook news consumers.” The study also found that Twitter was used more for breaking news, something that could be tracked with our real-time feature, Pulse, or through our API.

Read more about the Pew findings on their website

Find out more about how Parse.ly can give you this kind of insight into your audience by signing up for a free one-month trial:


Data, Journalists, and Learning to Code for the Newsroom

Recently the topic of coding for journalists has been a subject of debate, from The Atlantic’s “Should Journalism Schools Require Reporters to ‘Learn Code’? No” to Noah Veltman’s “On journalism and learning to code (again)”.

PandoDaily even crafted this handy flowchart for journalists:


(Source: “Should journalists learn to code?” flowchart, PandoDaily)

Parse.ly’s core mission is making data consumable and actionable for news editors, so we understand this challenge. How can newsrooms use more data and coding to create better journalism?

Derek Willis, an interactive reporter for The New York Times, wrote about the issue, saying:

“News organizations are inefficient, often far too cavalier about the raw materials that provide their lifeblood and can seem to lurch from story to story without a whole lot of reason. Many journalists have a bizarre fear of math and computers, and guaranteed a secret ballot, a decent portion might opt for a return to the days of typewriters and afternoon editions.”

Having the skills to analyze and present information through code doesn’t subvert the traditional reporting system—it adds a richer layer to the level of reporting. This interactive breakdown of the elite attendees of Davos this year and this time-lapse map of the Citi Bike bike-share program in New York transform what would be unintelligible numbers into content that’s fit for consumption by the general public.

And so, web and print journalists (those who did not land on “I only want to write literary reportage for the New Yorker”) can benefit from learning to code, though the phrase itself has multiple meanings, from front-end visualization to back-end data scraping.

We’ve pulled together some digital media experts to share their experiences and insights on journo-developing. We spoke to David Yanofsky (Reporter at Quartz), Emily Chow (Graphics Editor of The Washington Post), and Zach Sims (CEO of Codecademy) about their experience as reporters and how journalists can learn to code in order to enhance their reporting.

Why did you start coding as a reporter?

EC: “I started learning the sophomore year of college. I had official exposure through nuAsian, this really amazing magazine that I worked for at Northwestern. Going into j-school and thinking about what I wanted to do when I was 18, I never thought of interactive graphics or interactive journalism. It was a niche that was worth pursuing, and it was kind of different from what I focused on in high school, which was photography and basic layout design. I don’t think I realized that anything else was being done. I managed to use my spare time to learn how to code.”

DY: “I got a BFA in Communication Design at Washington University in St. Louis, but I didn’t learn any coding there. With my design experience, I started with a project I wanted to achieve, and from there, I learned a lot of Javascript and HTML5 for data visualization. I was a multimedia designer at Bloomberg, where I taught myself to code, and from there, I moved onto Quartz.”

What is the biggest obstacle—mental, logistical, or otherwise—for journalists who want to start coding?

ZS: “[Codecademy thinks] getting started is often the most difficult part—overcoming the perception that programming is impossible is a great place to start. You don’t need a formal math or science background to be good at programming, but oftentimes you do need lots of persistence.”

DY: “I think the hardest part is knowing where to start. There’s a lot of obstacles involved; for example, you need to have a clear idea of what you’re doing and what you’re accomplishing. When you’re attempting a project, you need to know why and how everything works, instead of just coding in abstract theory.”

EC: “A lot of the journo-developers out there are actually self-taught. And I think, from what I gather, self-taught is just finding a project and then, just trying and failing a lot… until you really take a crack at it, you don’t really understand it.”

Do you think programming is a “new form of literacy” in the age of the web?

EC: “I think the term ‘data’ has become trendy in the way that I look at it. From my understanding, computation-powered reporting has existed before. There has always been data, after all. Now, the idea of interactive graphics—those are the things that have become especially interesting to readers.”

ZS: “Definitely. Programming is intimately involved in almost everything we do and it’s hard to go a day without touching a program someone else built. It makes intuitive sense to understand how to use those technologies.”

What resources do you recommend for journalists starting to learn to code?

ZS: Codecademy: “A decent number of our users do use Codecademy for pre-professional development. We’ve got a few success stories that highlight uses like this. Also, we work with a lot of institutions to help design programs for them.”

EC: Stack Overflow: “There are probably at least 1,000 other people out there that have probably come across the same problem you have, and a huge community of coders of all skill levels that offer solutions or a better understanding of why your code doesn’t work. It’s just helped lead me in the right direction or to the right documentation, many times.”

DY: Hackathons: “I didn’t learn to code in school; I studied design. I did a hackathon at the Columbia University Graduate School of Journalism, which was very helpful because it allowed me to focus on a project and objective.”

DY: ScraperWiki: “It’s an online tool that allows you to code from information that you run from somewhere on the internet.”

(Post was compiled with help from Noel Duan, Summer ‘13 Business Intern.)

Insights from #ONA13

Last week, Parse.ly CEO Sachin Kamdar and VP of Sales and Marketing John Levitt joined 1,500 journalists and journalism experts at the Online News Association annual conference in Atlanta, GA.

Parse.ly proudly sponsored the registration area, but we hoped to leave conference attendees with more than a sales pitch. Like our product, we value actionable insights. After three days of sessions, conversations, drinks, and meetings, what can you bring back and put into action in your newsroom?

Best session?

For obvious reasons, the team vote went to “Analytics in the Newsroom: What’s Next?” The panel included moderator Dana Chinn, a Lecturer at USC’s Annenberg School for Communication & Journalism, and speakers Todd Cunningham, Director at Media Impact Project; James G. Robinson, Director of News Analytics at The New York Times; and Daniel Sieberg, Head of Media Outreach at Google.

The moderator and audience asked the panelists questions that we consider in our day-to-day product decisions at Parse.ly: How can we shape the discussion around data for creative types, like editors and reporters, so that they understand how the numbers can help them? What should newsrooms be measuring for the most positive impact on their audience and business?

You can listen to the full session here:

But the conversation around analytics for the newsroom wasn’t limited to #analytics4news. Kamdar described the mood at the event, “The healthy evolution of digital media was in full-force at ONA. Throughout the conference discussions circled around how the next generation of media professionals will need to expand their skill sets into programming, data visualization, statistics, and analytics. The next two years should be extremely exciting as these new skills are put to use.”

Favorite #ONA13 Tweets

As experienced conference goers can attest, attendees discuss some of the most interesting topics outside of conference sessions. More and more, Twitter captures that off-the-floor conversation. We’ve curated some of the tweets that allowed our team back home a glimpse into the conversations.  

And the Winner is… Our Clients!  

Finally, we’d like to say congratulations to our clients that were nominated for ONA awards, including:

Like insights from the team? The Authority Report subscription is FREE for all journalists who intend to use the data in articles and for all employees of media companies who may seek to formulate content strategy based on the data. Sign up here!

Just How Soon Will Google’s Dark Search Take Over?

Last month, we combined the two things we love most at Parse.ly, analytics and digital media, to create our first Authority Report. We collect data on billions of pageviews across hundreds of top-tier news sites. Just as individual newsroom editors can benefit from content and post analytics, so too can the entire digital media industry benefit from our aggregate trends.

Each month, the Authority Report reviews traffic sources to our network so that publishers can adjust their content strategies accordingly. Our most recent report covers trends in search, social, RSS readers, and aggregators. Take a look at the charts by subscribing to the report here, or take a look at the highlights below.

Currently, a whopping 46% of total referrer traffic is search. The top five search engines are Google sites (defined as an aggregate of all Google-owned properties), Yahoo, Bing, Ask, and DuckDuckGo. Google sites continue to dominate – in fact, they dominate every other referrer as well.

The internet has been abuzz over losing keyword data from Google search. We’ll leave it to the SEO experts to tell you how to deal with this. What we can show you is how soon you’ll have to deal with it (hint: very, very soon).


In just three months, hidden search terms (aka “not provided” keywords) grew by 70% and are on track to completely take over by November 2013.
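A projection like this amounts to simple trend extrapolation. A rough sketch of the arithmetic, with hypothetical monthly figures (the real monthly shares are in the report itself):

```python
# Hypothetical share of search referrals arriving with hidden
# ("not provided") keywords, one value per month.
months = [0, 1, 2, 3]
hidden_share = [0.44, 0.52, 0.63, 0.75]  # ~70% growth over three months

# Average monthly growth over the window, extrapolated linearly to 100%
slope = (hidden_share[-1] - hidden_share[0]) / (months[-1] - months[0])
months_to_full = (1.0 - hidden_share[-1]) / slope
print(f"~{months_to_full:.1f} more months until all keywords are hidden")
```

A linear fit is the crudest possible model, of course; the point is that at recent growth rates, the remaining visible keywords disappear within a few months.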

"Dark search" isn’t the only kind of dark traffic, though. As content publishers dive deeper into their audience data, they’re also discovering errors in the assumptions behind traditional analytics tools like Google Analytics and Adobe SiteCatalyst/Omniture. You can also listen to Parse.ly CTO Andrew Montalenti dive into this topic further on this podcast.

The Authority Report subscription is FREE for all journalists who intend to use the data in articles and for all employees of media companies who may seek to formulate content strategy based on the data. Sign up here!

Investors Back the New Content Performance Authority


By Sachin Kamdar, CEO

Parse.ly recently closed a $5M Series A financing round led by Grotech Ventures. You may have read about it today, but I want to share some more details about why this is important for the media industry. Our mission is to unlock the power of content through data and analytics. Over the last couple of years, we’ve proven that accessible analytics backed by serious backend technology can be extremely valuable for a wide variety of content sites. Now, we have the resources to take the next step in empowering media.

We’ve thought extremely hard about where to invest our resources. Though we’ve spent the past year focused on adding new features to the product (Pulse, Glimpse, advanced reporting, and more), several of our engineers have been hard at work on an entirely new way to evaluate content performance. This technology will finally get content metrics right and allow any content site to understand how to effectively influence its success. We’re still putting the finishing touches on this, but stay tuned as we release more details over the next several months. The best part: the focus is still on content — we’re continuing to dedicate all of our energy to the media industry.

One of the ways that we’re supporting the media industry is by sharing aggregated trends we see across the network of publishers. This inspired the first-ever ‘Authority Report’, which gives you greater insight into what readers really want, where they discover content, and how they share it. We’re going to release this report monthly, and you can expect deeper dives into specific areas like search, social, and aggregators. If you have suggestions about what you’d like to learn about here, please contact us at

What does this mean for our customers and the online publishing industry now? It means that your life is only going to get better. We’ll deliver product features attuned to your needs, provide superior customer support, and give back to the media and developer communities. You’ll also be seeing us more around the industry: for example, we’re attending and sponsoring the upcoming Online News Association conference in October (Atlanta GA) where we expect to learn from journalists and publishers of all kinds.

We’re more excited than ever about the future of Parse.ly. Our success is our customers’ success. Without your incredible insight we’d be building our technology in a vacuum. The feedback you’ve given us over the past two years drove our roadmap, which, in turn, drove our success. Thank you, and get ready for an exciting year to come!

Feel free to contact me at 917.934.3393 or if you have thoughts, feedback or questions.


Sachin Kamdar


Additional Coverage of the Authority Report

Feedly dominating the post-Reader world, and other web-publishing insights from Parse.ly

A month after Google Reader vanishes, Feedly ranks as the top RSS traffic referrer

Report: Other publishers are third biggest traffic driver to news sites

Facebook Driving More Than 2x Twitter Traffic To Parse.ly’s News Clients

Google sends three times as much traffic to news sites as Facebook does

Additional Coverage of our Series A Financing

Parse.ly Raises $5M For Predictive Analytics Platform That Helps Media Companies Decide What To Publish

Parse.ly nabs $5 million to keep you clicking on Grumpy Cat (updated)

Parse.ly Raises $5 Million, Invests In Core Tech, Data

Parse.ly Raises $5M From Blumberg, Grotech, ff Venture, FundersClub

Grotech leads $5 million round in analytics startup Parse.ly


Ratio Metrics: How Atlantic Media Measures Article Performance

This is a guest post by Adam Felder, Associate Director of Digital Analytics at Atlantic Media. It is the first post in the blog series, “Analytics Innovation for Online Content”, in which journalists and analysts use our data to improve their understanding of online content.

In analytics, there is a tendency to look only at the stories that were wildly successful. This is a rather myopic view of success. Internally, we refer to the “80/20” rule: the top 20 percent of our stories drive about 80 percent of our traffic. The actual ratios are closer to parity, but “80/20” makes for an easy-to-memorize catchphrase.
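Checking a ratio like this against your own archive is straightforward once stories are sorted by traffic. A minimal sketch with hypothetical per-story pageview counts:

```python
# Hypothetical pageview counts for ten stories in one month
pageviews = [50_000, 22_000, 9_000, 4_000, 2_500,
             1_800, 1_200, 900, 400, 200]

pageviews.sort(reverse=True)
top_n = max(1, len(pageviews) * 20 // 100)  # the top 20% of stories
top_share = sum(pageviews[:top_n]) / sum(pageviews)
print(f"Top {top_n} stories drive {top_share:.0%} of traffic")
```

With these made-up numbers, the top two stories carry roughly three-quarters of the traffic, which is the kind of concentration the catchphrase describes.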

Very often, however, there are not terribly useful lessons to learn from the upper 20% of traffic—they catch the perfect storm of social sharing, meme riding, a breaking story within the larger news cycle, etc. The success of these stories is not due to any specific repeatable step or steps.

Large online publishers like ours publish diverse content on a variety of topics at a high velocity. I began to wonder: could we use all this data to get a clearer picture of our content’s performance?

Read more

Zipf’s Law of the Internet: Explaining Online Behavior

In 1949, the linguist George Kingsley Zipf noticed that given a natural language corpus, words are distributed such that the frequency of any word is inversely proportional to its rank in the frequency table. This means that the most frequent word occurs twice as often as the second most frequent word, three times as often as the third most frequent word, etc. He then showed that the relationship also applies to populations in cities of a given country.

An article in The Quarterly Journal of Economics described Zipf’s law for cities as “one of the most conspicuous empirical facts in economics, or in the social science generally.” Social scientists have tried to understand why such a simple relationship holds. Zipf’s Law appears entirely natural. Steven Strogatz wrote in The New York Times, “No city planner imposed it, and no citizens conspired to make it happen. Something is enforcing this invisible law, but we’re still in the dark about what that something might be.”

Zipf’s Law is expressed mathematically as

log R = a - b log n

where R is the rank of the datum, n its value, and a and b are constants. 

Data conforms to Zipf’s law when the log-log plot is linear (b is a constant). When this regression is applied to cities, the best fit has been found with b = 1.07.

At Parse.ly, our data shows that the same simple mathematical formulation can explain behavior online. Every day, millions of URLs are viewed within the content network; these web pages are published by Mashable, The Atlantic, Gawker, and more than 100 other publishers. A glance at the data on a randomly selected date (March 10, 2013) shows that the majority of pageviews in the content network are referred by a small subset of domains on the internet.


These plots show the ecosystem of domains that referred 75%, 95%, and 99% of network traffic on March 10, 2013. Each circle represents a domain, and its area is determined by the number of pageviews it referred to the network. The largest referrer is the lavender body that we like to refer to as the brightest cluster galaxy (BCG), and Facebook is the wine-colored body that is (randomly) superimposed.




To determine whether the number of network referrals fits a Zipf distribution, we need to measure whether log(pageviews referred) and log(domain rank) are linearly related. A Pearson correlation coefficient of −1 (or +1) signifies that two variables have a perfectly linear relationship.

On a log-log scale, we plotted the number of pageviews that each of 50,929 domains referred to the network as a function of how the domain ranked in its number of referrals and found that the Pearson correlation coefficient is -0.988. Inbound traffic to the content network follows a pattern that can be quantified by Zipf’s Law.
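This check is easy to reproduce on synthetic data. The sketch below generates values following an exact power law with the cities exponent b = 1.07, then confirms that a least-squares fit on the log-log scale recovers the exponent and that the Pearson coefficient is −1 (this assumes NumPy and is an illustration, not our production analysis):

```python
import numpy as np

# Exact Zipf data: value proportional to 1 / rank**b, with b = 1.07
b = 1.07
ranks = np.arange(1, 1001)
values = 1e6 / ranks ** b

log_rank = np.log10(ranks)
log_value = np.log10(values)

# Fit log(value) = a - b * log(rank); the slope should come out to -b
slope, intercept = np.polyfit(log_rank, log_value, 1)

# Pearson correlation coefficient of the log-log data
r = np.corrcoef(log_rank, log_value)[0, 1]

print(f"fitted b = {-slope:.2f}, Pearson r = {r:.3f}")
```

Real referrer data is noisier than this ideal curve, which is why the measured coefficient of −0.988, so close to −1, is striking.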


A year after Strogatz marveled at the naturally occurring phenomenon of Zipf’s Law of Cities in The New York Times, Edward L. Glaeser, an economics professor at Harvard, posed a hypothesis: “My own view is that Zipf’s Law is really about the operation of agglomeration—the attraction of people to more people.”

We think agglomeration of information (the attraction of people to the richest agglomerations of information) is a starting point for explaining Zipf’s Law of the Internet. Inspection of the top-ten domains on this randomly chosen date, and the number of users/voices on each, confirms it.

—Emily Chen, Engineering Intern

Feedback Loop: How I Improved My Day-to-Day Productivity with Data

Data is enlightening, mesmerizing, and reflective. With the advent of technology, data points are much easier to harvest now; the trick is figuring out what answers to read from them. It is exactly the problem Nate Silver’s book The Signal and the Noise touches on. This post is about how I have used data to improve my workflow with two of my primary tools, git and email.


When I started at Parse.ly, my experience with git was very limited: I knew a few commands, having learned them in my free time to fulfill a specific use case. As such, I developed bad habits, but did not see them until I found a tool called git-extras.

In my limited experience, I had read some best practices for git, one of which was to commit early and commit often. The key idea is that without commits, you lose the advantages gained by using version control. Nowhere did I read about cleaning up commits, though. This resulted in many tiny commits, each with a single change at its center. Easily revertible and thus, I felt, useful.

The result is shown by git summary:

±git summary

 project  : lab
 repo age : 3 years, 6 months ago
 commits  : 1413
 active   : 243 days
 files    : 1774
 authors  :
   284  Raymond Tang          20.1%
   183  Martin Laprise        13.0%
   162  Josh Click            11.5%
   108  Andrew Montalenti      7.6%
   106  Gabriel Barth-Maron    7.5%
    98  Jenna Zeigen           6.9%
    68  Dominic Rocco          4.8%
    58  Mike Sukmanowsky       4.1%
    58  Sam Wagner             4.1%
    55  Vincent Driessen       3.9%
    47  Emily Chen             3.3%
    45  Zach Cimafonte         3.2%
    42  dfdeshom               3.0%
    41  Keith Bourgoin         2.9%
    28  Matt Krukowski         2.0%
     9  Charles Zhang          0.6%
     7  Vihang                 0.5%
     4  Emmett Butler          0.3%
     3  Miya Schneider         0.2%
     3  talentless             0.2%
     1  Jenna                  0.1%
     1  drocco                 0.1%
     1  emmett9001             0.1%
     1  fastturtle             0.1%

This repository is used for intern projects and isolated code experiments, and despite being around for only ~21% of the repository’s age, I had committed much more than anyone else on the team, including those who have been with the team since the beginning. I had spammed the version history to the point where it was significantly less useful as a tool.

It was then I realized I should look at how the other engineers managed to do much more with fewer commits. Originally I thought they wrote and committed things only after they were perfect, avoiding version control, but it became apparent in that moment that, no, I had the bad habits, and I needed to understand how to clean up my own commits. To that end, I learned about rebase: the details and the secrets.


For about a year now I have been using a tool called Gmail Meter to analyze my email and describe trends from it. Some key ones popped out immediately and made me question what approach I should take with my email. For instance, I took quite a bit of time to respond to emails.


But in return, I wrote much more than I had to, composing detailed, well-thought-out emails.


After looking at the data, and through my own interactions, I realized many people would prefer more immediate responses, with the ability to follow up later if something was not clear. This both decreased the time I spent on email and allowed me to work through messages much more quickly, as illustrated in the following graphs, whose data points come almost six months after those shown above.


With more than 75% of my emails answered in less than 10 words, and a little more than half answered in under an hour, I would say I definitely improved my response times.

This data changed the way I interact with git and email. Data can make for large improvements in life, even in its smallest parts. So how has data improved your life?

—Raymond Tang, Engineering Intern

Intern Blog: What I’ve Learned at Parse.ly

I’m a junior at NYU, where I’m studying computer science. I have a short attention span—which is why I’ve decided to write this in listicle form for those of you who are also easily distracted. I code until I barely wake up in time for work and I climb rocks. As you can tell, I am a man of few words—I’d like to credit Python for teaching me about conciseness.

1) Don’t be afraid to learn—and make mistakes in front of everyone else.


When you’re a one-man team or the only programmer in a group, you tend to have your own way with things. As the team gets bigger, other people will check out your work, and will very likely correct the mistakes you make. In the past, I’d get hot-headed because I just didn’t want to be wrong. Now, I embrace making mistakes—as long as I learn from them.

2) Be open to change. (Even if it breaks your heart.)


It’s critical to maintain an open mind, especially when listening to those who have more experience. I was once a PHP fanatic. When I first heard of Ruby, I refused to learn it. After all, PHP was more established and used by all the big companies. In the back of my head, though, I knew that I was just reluctant to adapt because I refused to believe that other technologies were better. Now, I’m quite familiar with Ruby, as well as Python—the language of choice here at Parse.ly. I’ll finally admit, my elders were right: both of these languages are eons ahead of PHP.

3) Your code should be easy enough for a five-year-old to read and understand.


Diving into new codebases has always intimidated me, but it’s necessary for learning the style and design that more experienced coders stick to. I realized that a lot of code out there on the web is pretty terribly written, particularly in terms of readability. If the objective is to make code harder to copy, then those developers deserve a trophy. In my own code, I’ve tried to avoid inflicting that pain on others, especially my co-workers.

4) Working with people keeps you sane—or, at least, allows you to embrace your insanity with others.


A huge benefit of working at Parse.ly is the ability to work remotely—a benefit I take advantage of all too often. It’s just too easy to code uninterrupted for hours and hours at a time; it’s something I’ve gotten used to after spending many nights in college hacking away at personal projects. A few days of this, though, makes me extremely restless, and the comfort of my room feels more like solitary confinement. Working around people changes the whole game! Their presence keeps my spirits up.

5) Keep your projects organized to keep your mind organized, too.


When I’m hacking away on a personal project, keeping organized isn’t a big priority. I’ll usually just keep a mental list of things I’m currently working on. However, on a big team, organization is crucial. Without a well-organized project, people will clash and development will slow to a snail’s pace. We must communicate incredibly well—especially in a distributed environment—and act as a well-oiled machine.

6) You have a lot more fun when you do something meaningful—whatever “meaningful” means to you.

Parse.ly taught me that you can really enjoy working while still being very productive. This point in particular seems to be a common thread between many startups, and will probably be the thread that continues to pull me back into startups.

—Josh Qian, Engineering Intern

Intern Blog: Applying Programming Concepts to Everyday Life

Over the past few months, I have been slowly learning Python. I wanted to learn programming as a way of expanding my knowledge and becoming familiar with software engineering. While I do have more of a business and analysis background, I still find it important, especially in this day and age, to know a little bit of programming. Python has proven to be beneficial in data analysis and data mining. Through Python, I have also developed a more logical and efficient way of problem solving. Rather than just getting the answer, I constantly try to make the process more efficient. Applying this mentality to everyday life makes me approach different tasks from different angles. As I progress through my career and improve my skills in all areas, I will be tasked to use higher-level tools to complete extensive analyses. These tools require simple-to-intermediate understandings of constructing proper syntax, and Python has really helped me absorb those skills. Over the past few months, I have been using two different methods to teach myself.

I first signed up for Codecademy about six months ago, but didn’t stay consistent with it until about a month ago. While Codecademy was a great introduction to Python programming, it was lacking in a few ways: I wasn’t learning theory, the information was hard to retain, and the hints gave too much help. Instead of quitting on Codecademy, I signed up for the Intro to Computer Science class on Udacity. Using a different method, Udacity is able to “teach” through videos with a real instructor and in-depth explanations of computer science theory. The frequent quizzes also assist in recalling important concepts from earlier lessons.

What I soon came to realize is that I have been using these same programming concepts for years now. The other day at my internship, I was tasked with completing some VLOOKUP formulas in a few Excel spreadsheets. Simply put, a VLOOKUP formula is like a function that finds and references another value and then outputs it. While the formatting is different and much simpler than what someone would see in Python, the overall concept is extremely similar. Just as improper syntax produces an error in Python, the same occurs with these Excel formulas. Another formula that comes to mind is SUM(IF) in Excel, which can take advantage of a nested “if” statement with either a Boolean or an array to calculate sums. Each individual cell of a spreadsheet also acts as a little interpreter waiting to take inputted data and transform it somehow. These formulas, along with several others in Excel, make life easier by making quick calculations. Likewise, Python uses “if” statements, Booleans, and arrays.
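The analogy maps directly onto Python. A small sketch (the data and names are hypothetical) of VLOOKUP as a dictionary lookup and SUM(IF) as a filtered sum:

```python
# A lookup table, as VLOOKUP would scan it: key -> value
prices = {"apple": 1.25, "banana": 0.50, "cherry": 3.00}

def vlookup(key, table, default=None):
    """Like Excel's VLOOKUP: find a key and return its associated value."""
    return table.get(key, default)

# Like Excel's SUM(IF(...)): sum only the values matching a condition
orders = [("apple", 4), ("banana", 6), ("apple", 2)]
apple_total = sum(qty for item, qty in orders if item == "apple")

print(vlookup("banana", prices))  # 0.5
print(apple_total)                # 6
```

As in a spreadsheet, a missing key doesn't have to crash anything; the lookup simply returns a default, much like wrapping VLOOKUP in IFERROR.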


After coming to this realization, Python has become much easier to understand conceptually. Things like loops have meaning behind them and are more useful than ever. The syntax, as well, reads like a book in my mind, instead of a bunch of random words separated by colons, quotations, and numbers. No longer do I fear its complexity; rather, I am able to compare it to functions I am already familiar with, which allows me to truly appreciate how many awesome things I can do with it. While I am still only a beginner in Python, I believe this understanding will help guide me through further lessons. Finding a way to compare two seemingly dissimilar things can help in all aspects of life—I’m just starting with my discoveries in Python and Excel, after all.

—Afzal Jasani, Business and Marketing Intern