Archive | ECommerce Applications

A Dare from Bill – and a Table Topic! #CareToShare

14 Jun

“Vanidad – de Vanidades” (at 4GLTE Speeds)

12 Jun


Inceptions

28 May

And then it hits you – intersecting #SystemsSociology with #BusinessEngineering (Studies – one already out there as an #MBE)… howzabout kicking things up a few notches with a few splashes of Advanced #Simultaneity Research? (And making it the core of both the dissertation as well as the book?…)

#FeatureSchlock – thereby ye formed!

😛

20140530t0222CDT


Article: @Instagram Introduces Instagram Direct

20 Dec

Instagram Introduces Instagram Direct

http://techcrunch.com/2013/12/12/instagram-messaging/

(Via #FlipBook)

Article: AT&T, T-Mobile, And Verizon Moto X All Get CyanogenMod 10.2 Experimental Nightly Builds

28 Oct

AT&T, T-Mobile, And Verizon Moto X All Get CyanogenMod 10.2 Experimental Nightly Builds

http://www.androidpolice.com/2013/10/28/att-t-mobile-and-verizon-moto-x-all-get-cyanogenmod-10-2-experimental-nightly-builds/

Corpus, Corpuscles, Nuggets and Kernels: #ThoughtTracing? or the #Accruals of Analytics? (#Discrete #Math Class)

21 Apr

So we have to write a paper – I start ‘nominating-through-research’…

“… Took a few mins at the Library (Well, mostly the #Gooracle), again, just in case we all end up ‘voting’ for this topic…


  • Computational linguistics, such as information extraction, corpus analysis, and so on

Meaning

  • Logic and set theory
  • Enumeration
  • Algorithmic concepts
  • Relations and functions
  • Graph theory
  • Trees
  • Boolean Algebra

We get to pick a couple – NOT all of the above – as they interpolate/intersect/buttress the main topic/headline.


So someone actually sells these?

Our insatiable thirst for figuring stuff out?

Corpus Analysis

One of the key areas of research in how computers can facilitate language learning is the field of corpus linguistics. Simply put, corpus linguistics is the study of language as expressed in samples (corpora) of “real world” texts.

In order to conduct a study of language (or develop a product) which is corpus-based, it is necessary to either gain access to, or develop a corpus of language, and then analyze the corpus using dedicated analysis tools such as concordancing programs. A corpus consists of a databank of natural texts, compiled from writing and/or a transcription of recorded speech. A concordancer is a software program which analyzes corpora and ranks or lists the results, letting us know which vocabulary words and phrases are most frequent (and thus most important to study). The main focus of corpus linguistics is to discover patterns of authentic language use through analysis of actual usage.
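The frequency-ranking step a concordancer performs can be sketched in a few lines of Python. (The mini-corpus and the tokenization rule below are my own assumptions for illustration, not from any of the products described above.)

```python
import re
from collections import Counter

def rank_vocabulary(corpus_texts, top_n=5):
    """Tokenize a list of raw texts and rank words by frequency --
    the core step a concordancer automates over a corpus."""
    counts = Counter()
    for text in corpus_texts:
        # Naive tokenizer: lowercase words and apostrophes only
        counts.update(re.findall(r"[a-z']+", text.lower()))
    return counts.most_common(top_n)

# Hypothetical mini-corpus standing in for "real world" texts
corpus = [
    "The market opened higher as the market rallied.",
    "Analysts said the rally in the market may continue.",
]
top = rank_vocabulary(corpus, top_n=3)  # most frequent words first
```

A real business-English corpus would be millions of words, but the ranking logic is the same: count, sort, and surface the high-frequency vocabulary worth teaching first.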

Many of our products and services are based on the careful development and analysis of focused-corpora, and depending on your specific needs, we can quickly create, analyze and provide output from corpora for a wide range of purposes.

For example, the popular NHK TV show “Eigo Shaberenaito” recently hired us to develop a list of essential English vocabulary words needed to be successful in business. Within less than 2 months, we were able to create a corpus of over 100 million words of current written and spoken business English, whose analysis yielded a list of 1000 high frequency business English words that are now being taught on their TV show and accompanying online and physical textbooks.

Retrieved from: http://www.charlie-browne.com/services/corpus-analysis/


Twitter?…

(as, yet again, companies like the one I service/support have placed a TON of resources into updating their Groupware offerings…)


“4. Corpus analysis

First, we checked the distribution of words frequencies in the corpus. A plot of word frequencies is presented in Figure 1. As we can see from the plot, the distribution of word frequencies follows Zipf’s law, which confirms a proper characteristic of the collected corpus.

Next, we used TreeTagger (Schmid, 1994) for English to tag all the posts in the corpus. We are interested in a difference of tags distributions between sets of texts (positive, negative, neutral).

To perform a pairwise comparison of tags distributions, we calculated the following value for each tag and two sets (i.e. positive and negative posts)…”
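The paper’s exact comparison statistic isn’t reproduced in the excerpt above, so as an illustrative stand-in, here’s a minimal Python sketch of comparing normalized tag distributions between two sets of posts. (The TreeTagger-style tags and the tiny post sets are invented for illustration.)

```python
from collections import Counter

def tag_distribution(tag_sequences):
    """Normalized frequency of each POS tag across a set of tagged posts."""
    counts = Counter(tag for tags in tag_sequences for tag in tags)
    total = sum(counts.values())
    return {tag: n / total for tag, n in counts.items()}

def compare_tags(dist_a, dist_b):
    """Per-tag difference between two distributions (e.g. positive vs negative)."""
    tags = set(dist_a) | set(dist_b)
    return {t: dist_a.get(t, 0.0) - dist_b.get(t, 0.0) for t in tags}

# Hypothetical tagged posts (TreeTagger-style tags, made up for illustration)
positive = [["PP", "VBZ", "JJ"], ["PP", "VBP", "JJ", "NN"]]
negative = [["PP", "VBZ", "RB", "JJ"], ["NN", "VBZ", "RB"]]
diff = compare_tags(tag_distribution(positive), tag_distribution(negative))
# A negative diff for a tag means it's relatively more common in negative posts
```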

(And that’s where peeking at the URL is a must…?)

Retrieved from: http://deepthoughtinc.com/wp-content/uploads/2011/01/Twitter-as-a-Corpus-for-Sentiment-Analysis-and-Opinion-Mining.pdf


Anyway, I have an hour before this Coffeehouse shuts down… so I hope the above helps with the ‘platform’ – and nomination process? Go, #Corpus, GOOOO!”

So that’s one of my school entries – again, enthused by the possibilities! – and the realities of a market that continues to elevate a lot of us?

Imaginative Recursions? On .JPEG Compression – and DCT (Discrete Cosine Transform) – Discrete Math Class

10 Apr

The question being…

” Describe an activity in terms of its iterative components, such as solving a Sudoku puzzle, a game of chess or backgammon.

Please mention any recursive elements that may occur…”

And the answer, of course…

Iterative Recursions?

Games?

Howzabout compressing Images?

– Enter Rasterization

Meaning, how do you think we’ve been shuffling, schlepping and otherwise compressing those beautiful shots – first over a feeble 2,400 bps modem (it took a while, BUT in most cases it was worth the wait!)?

First, JPEGs (or .jpg’s, as they’re commonly known) made a big hit back in the late nineties, allowing higher-definition images to be part of a website – beyond those cheesy .GIFs, which yes, could be animated, but once one wanted a bit more resolution and color, ended up becoming larger and larger files…

(and I tag this as a “Game” as I’m having a ball with a suite of about six apps, my “Pocket Photoshop” which I use to shoot, edit, composite, bubble and prep for my portfolio)

#Prismacolor Crayons? Look again!

So what’s rasterization?

It’s a process whereby a grid is created, and yes, each single individual point (pixel) has a very unique identity. For a print image, for example, that identity includes values such as its Pantone colorization (to allow CMYK color separation), along with other metadata required by the downstream print equipment: channels for perfect outlines, “Alpha Channels” that let literally split hairs BE printed OR not, masks, etc. These were all layered into .psd (Photoshop) files, which then had to be reconverted to .EPS (Encapsulated PostScript!… talk about MATH!)… so yes, how does one quickly shuffle a preview file?
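That per-pixel, per-channel idea can be sketched minimally in Python. (The channel names and the tiny 2×2 grid are assumptions for illustration, not any particular file format’s layout.)

```python
def make_raster(width, height, channels=("R", "G", "B", "A")):
    """Create a width x height grid where every pixel holds one value
    per channel -- color plus an alpha (mask) channel, as in a layered file."""
    return [[{c: 0 for c in channels} for _ in range(width)]
            for _ in range(height)]

img = make_raster(2, 2)
img[0][0].update(R=255, A=255)   # opaque red pixel at the top-left
img[0][1]["A"] = 0               # fully masked-out pixel: present, but not printed
```

Real print workflows stack many such grids (layers, masks, spot-color channels), which is exactly why the files balloon and a compressed preview becomes necessary.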

“Typical usage

The JPEG compression algorithm is at its best on photographs and paintings of realistic scenes with smooth variations of tone and color. For web usage, where the amount of data used for an image is important, JPEG is very popular. JPEG/Exif is also the most common format saved by digital cameras.

On the other hand, JPEG may not be as well suited for line drawings and other textual or iconic graphics, where the sharp contrasts between adjacent pixels can cause noticeable artifacts. Such images may be better saved in a lossless graphics format such as TIFF, GIF, PNG, or a raw image format. The JPEG standard actually includes a lossless coding mode, but that mode is not supported in most products.

As the typical use of JPEG is a lossy compression method, which somewhat reduces the image fidelity, it should not be used in scenarios where the exact reproduction of the data is required (such as some scientific and medical imaging applications and certain technical image processing work).

JPEG is also not well suited to files that will undergo multiple edits, as some image quality will usually be lost each time the image is decompressed and recompressed, particularly if the image is cropped or shifted, or if encoding parameters are changed – see digital generation loss for details. To avoid this, an image that is being modified or may be modified in the future can be saved in a lossless format, with a copy exported as JPEG for distribution.

JPEG compression

JPEG uses a lossy form of compression based on the discrete cosine transform (DCT). This mathematical operation converts each frame/field of the video source from the spatial (2D) domain into the frequency domain (aka transform domain.) A perceptual model based loosely on the human psychovisual system discards high-frequency information, i.e. sharp transitions in intensity, and color hue. In the transform domain, the process of reducing information is called quantization. In layman’s terms, quantization is a method for optimally reducing a large number scale (with different occurrences of each number) into a smaller one, and the transform domain is a convenient representation of the image because the high-frequency coefficients, which contribute less to the overall picture than other coefficients, are characteristically small values with high compressibility. The quantized coefficients are then sequenced and losslessly packed into the output bitstream. Nearly all software implementations of JPEG permit user control over the compression ratio (as well as other optional parameters), allowing the user to trade off picture quality for smaller file size. In embedded applications (such as miniDV, which uses a similar DCT-compression scheme), the parameters are pre-selected and fixed for the application.” (Wiki, 2013)
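The DCT-plus-quantization pipeline described above can be sketched in plain Python. (The single quantization divisor is my simplification – real JPEG uses a full 8×8 quantization table – but the transform-then-round structure is the same.)

```python
import math

def dct2(block):
    """2D DCT-II of an 8x8 block -- the transform step of JPEG compression."""
    N = 8
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            cu = math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)
            cv = math.sqrt(1 / N) if v == 0 else math.sqrt(2 / N)
            out[u][v] = cu * cv * s
    return out

def quantize(coeffs, q=16):
    """Quantization: divide and round, discarding small high-frequency detail."""
    return [[round(c / q) for c in row] for row in coeffs]

flat = [[128] * 8 for _ in range(8)]        # a perfectly smooth gray block
coeffs = quantize(dct2(flat), q=16)
# A flat block puts all its energy in the DC coefficient; every
# higher-frequency coefficient quantizes to zero -- maximum compressibility.
```

That’s the whole trick in miniature: smooth regions collapse to a handful of nonzero coefficients, which is why photographs compress so well and sharp line art artifacts so badly.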

So maybe I do not play games… but I’m sure many in the audience are staring at these, right now!

Source: http://en.wikipedia.org/wiki/JPEG