Credit Card OCR with OpenCV and Python

Today's blog post is a continuation of our recent series on Optical Character Recognition (OCR) and computer vision.

In a previous blog post, we learned how to install the Tesseract binary and use it for OCR. We then learned how to clean up images using basic image processing techniques to improve the output of Tesseract OCR.

However, as I've mentioned multiple times in these previous posts, Tesseract should not be considered a general, off-the-shelf solution for Optical Character Recognition capable of obtaining high accuracy.

In some cases, it will work great; in others, it will fail miserably.

A great example of such a use case is credit card recognition, where given an input image, we wish to:

  1. Detect the location of the credit card in the image.
  2. Localize the four groupings of four digits, pertaining to the 16 digits on the credit card.
  3. Apply OCR to recognize the 16 digits on the credit card.
  4. Recognize the type of credit card (i.e., Visa, MasterCard, American Express, etc.).

In these cases, the Tesseract library is unable to correctly identify the digits (this is likely due to Tesseract not being trained on credit card example fonts). Therefore, we need to devise our own custom solution to OCR credit cards.

In today's blog post I'll be demonstrating how we can use template matching as a form of OCR to help us create a solution to automatically recognize credit cards and extract the associated credit card digits from images.

To learn more about using template matching for OCR with OpenCV and Python, simply keep reading.


Credit Card OCR with OpenCV and Python

Today's blog post is broken into three parts.

In the first section, we'll discuss the OCR-A font, a font created specifically to aid Optical Character Recognition algorithms.

We'll then devise a computer vision and image processing algorithm that can:

  1. Localize the four groupings of four digits on a credit card.
  2. Extract each of these four groupings followed by segmenting each of the 16 numbers individually.
  3. Recognize each of the 16 credit card digits by using template matching and the OCR-A font.

Finally, we'll look at some examples of applying our credit card OCR algorithm to actual images.

The OCR-A font

Figure 1: The OCR-A font, originally developed to aid Optical Character Recognition systems (source).

The OCR-A font was designed in the late 1960s such that both (1) OCR algorithms at that time and (2) humans could easily recognize the characters. The font is backed by standards organizations, including ANSI and ISO, among others.

Despite the fact that modern OCR systems don't need specialized fonts such as OCR-A, it is still widely used on ID cards, statements, and credit cards.

In fact, there are quite a few fonts designed specifically for OCR including OCR-B and MICR E-13B.

Figure 2: The OCR-B font, an alternative to OCR-A (source).

While you might not write a paper check too often these days, the next time you do, you'll see the MICR E-13B font used at the bottom containing your routing and account numbers. MICR stands for Magnetic Ink Character Recognition code. Magnetic sensors, cameras, and scanners all read your checks regularly.

Figure 3: The MICR E-13B font commonly found on bank checks (source).

Each of the above fonts has one thing in common: they are designed for easy OCR.

For this tutorial, we will build a template matching system for the OCR-A font, commonly found on the front of credit/debit cards.

OCR via template matching with OpenCV

In this section we'll implement our template matching algorithm with Python + OpenCV to automatically recognize credit card digits.

In order to accomplish this, we'll need to use a number of image processing operations, including thresholding, calculating gradient magnitude representations, morphological operations, and contour extraction. These techniques have been used in other blog posts to detect barcodes in images and recognize machine-readable zones in passport images.

Since there will be many image processing operations applied to help us detect and extract the credit card digits, I've included numerous intermediate screenshots of the input image as it passes through our image processing pipeline.

These additional screenshots will give you extra insight as to how we are able to chain together basic image processing techniques to build a solution to a computer vision project.

Let's go ahead and get started.

Open up a new file, name it ocr_template_match.py, and we'll get to work:

# import the necessary packages
from imutils import contours
import numpy as np
import argparse
import imutils
import cv2

Lines 1-6 handle importing packages for this script. You will need to install OpenCV and imutils if you don't already have them installed on your machine. Template matching has been around awhile in OpenCV, so your version (v2.4, v3.*, etc.) will likely work.

To install/upgrade imutils, simply use pip:

$ pip install --upgrade imutils          

Note: If you are using Python virtual environments (as all of my OpenCV install tutorials do), make sure you use the workon command to access your virtual environment first and then install/upgrade imutils.

Now that we've installed and imported packages, we can parse our command line arguments:

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
	help="path to input image")
ap.add_argument("-r", "--reference", required=True,
	help="path to reference OCR-A image")
args = vars(ap.parse_args())

On Lines 8-14 we establish an argument parser, add two arguments, and parse them, storing the result as the variable args.

The two required command line arguments are:

  • --image : The path to the image to be OCR'd.
  • --reference : The path to the reference OCR-A image. This image contains the digits 0-9 in the OCR-A font, thereby allowing us to perform template matching later in the pipeline.

Next let's define credit card types:

# define a dictionary that maps the first digit of a credit card
# number to the credit card type
FIRST_NUMBER = {
	"3": "American Express",
	"4": "Visa",
	"5": "MasterCard",
	"6": "Discover Card"
}

Credit card types, such as American Express, Visa, etc., can be identified by examining the first digit in the 16 digit credit card number. On Lines 16-23 we define a dictionary, FIRST_NUMBER, which maps the first digit to the corresponding credit card type.
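
For instance, once the full 16-digit string has been OCR'd, identifying the issuer is a single dictionary lookup. Here is a minimal sketch using one of the fake card numbers from the results section later in this post (the card_number variable is purely illustrative):

# hypothetical, fully OCR'd card number (fake, for illustration only)
card_number = "4000123456789010"

# the first character indexes into FIRST_NUMBER to give the issuer;
# .get supplies a fallback for first digits not in the dictionary
print(FIRST_NUMBER.get(card_number[0], "Unknown"))  # Visa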

Let's start our image processing pipeline by loading the reference OCR-A image:

# load the reference OCR-A image from disk, convert it to grayscale,
# and threshold it, such that the digits appear as *white* on a
# *black* background (the inverted threshold flips the reference
# image's black digits on a white background)
ref = cv2.imread(args["reference"])
ref = cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY)
ref = cv2.threshold(ref, 10, 255, cv2.THRESH_BINARY_INV)[1]

First, we load the reference OCR-A image (Line 29), followed by converting it to grayscale (Line 30) and thresholding + inverting it (Line 31). In each of these operations we store or overwrite ref, our reference image.

Figure 4: The OCR-A font for the digits 0-9. We will be using this font along with template matching to OCR credit card digits in images.

Figure 4 shows the effect of these steps.

Now let's locate contours on our OCR-A font image:

# find contours in the OCR-A image (i.e., the outlines of the digits),
# sort them from left to right, and initialize a dictionary to map
# digit name to the ROI
refCnts = cv2.findContours(ref.copy(), cv2.RETR_EXTERNAL,
	cv2.CHAIN_APPROX_SIMPLE)
refCnts = imutils.grab_contours(refCnts)
refCnts = contours.sort_contours(refCnts, method="left-to-right")[0]
digits = {}

On Lines 36 and 37 we find the contours present in the ref image. Then, because OpenCV 2.4, 3, and 4 store the returned contour data differently, we call imutils.grab_contours on Line 38 to grab the contour list regardless of your OpenCV version.
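
If you'd rather not lean on imutils for this step, the version shim is small. Roughly speaking, grab_contours does the equivalent of the following (a sketch of the idea, not the library's exact source):

# cv2.findContours returns (contours, hierarchy) in OpenCV 2.4 and
# 4.x, but (image, contours, hierarchy) in OpenCV 3 -- select the
# contour list based on the length of the returned tuple
ret = cv2.findContours(ref.copy(), cv2.RETR_EXTERNAL,
	cv2.CHAIN_APPROX_SIMPLE)
refCnts = ret[0] if len(ret) == 2 else ret[1]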

Next, we sort the contours from left-to-right as well as initialize a dictionary, digits, which maps the digit name to the region of interest (Lines 39 and 40).

At this point, we should loop through the contours, extract ROIs, and associate them with their corresponding digits:

# loop over the OCR-A reference contours
for (i, c) in enumerate(refCnts):
	# compute the bounding box for the digit, extract it, and resize
	# it to a fixed size
	(x, y, w, h) = cv2.boundingRect(c)
	roi = ref[y:y + h, x:x + w]
	roi = cv2.resize(roi, (57, 88))

	# update the digits dictionary, mapping the digit name to the ROI
	digits[i] = roi

On Line 43 we loop through the reference image contours. In the loop, i holds the digit name/number and c holds the contour.

We compute a bounding box around each contour, c (Line 46), storing the (x, y)-coordinates and width/height of the rectangle.

On Line 47 we extract the roi from ref (the reference image) using the bounding rectangle parameters. This ROI contains the digit. We resize each ROI on Line 48 to a fixed size of 57×88 pixels. We need to ensure every digit is resized to a fixed size in order to apply template matching for digit recognition later in this tutorial.

We associate each digit 0-9 (the dictionary keys) with each roi image (the dictionary values) on Line 51.

At this point, we are done extracting the digits from our reference image and associating them with their corresponding digit name.
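
As a quick sanity check (not part of the script itself), you could display each stored template to verify that the extraction went as planned. A small debugging sketch:

# optional debugging aid: show each 57x88 digit template in turn
for (digit, digitROI) in digits.items():
	cv2.imshow("Digit {}".format(digit), digitROI)
	cv2.waitKey(0)
	cv2.destroyAllWindows()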

Our next goal is to isolate the 16-digit credit card number in the input --image. We need to find and isolate the numbers before we can initiate template matching to identify each of the digits. These image processing steps are quite interesting and insightful, especially if you have never developed an image processing pipeline before, so be sure to pay close attention.

Let's continue by initializing a couple of structuring kernels:

# initialize a rectangular (wider than it is tall) and square
# structuring kernel
rectKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 3))
sqKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))

You can think of a kernel as a small matrix which we slide across the image to perform (convolution) operations such as blurring, sharpening, edge detection, or other image processing operations.

On Lines 55 and 56 we construct two such kernels: one rectangular and one square. We will use the rectangular one for a Top-hat morphological operator and the square one for a closing operation. We'll see these in action shortly.
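
If you're curious what these structuring elements actually contain, you can print them. For cv2.MORPH_RECT they are simply arrays of ones; note that OpenCV takes the size as (width, height), so the NumPy shape is transposed. A quick inspection sketch:

# a rectangular structuring element is just a matrix of ones;
# (9, 3) means 9 columns wide by 3 rows tall, hence shape (3, 9)
print(rectKernel.shape)
print(rectKernel)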

Now let's prepare the image we are going to OCR:

# load the input image, resize it, and convert it to grayscale
image = cv2.imread(args["image"])
image = imutils.resize(image, width=300)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

On Line 59 we load our command line argument image which holds the photo of the credit card. Then, we resize it to width=300, maintaining the aspect ratio (Line 60), followed by converting it to grayscale (Line 61).

Let's take a look at our input image:

Figure 5: The example input credit card image that we will be OCR'ing in the remainder of this tutorial.

Followed by our resize and grayscale operations:

Figure 6: Converting the image to grayscale is a requirement prior to applying the rest of our image processing pipeline.

Now that our image is grayscaled and the size is consistent, let's perform a morphological operation:

# apply a tophat (whitehat) morphological operator to find light
# regions against a dark background (i.e., the credit card numbers)
tophat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, rectKernel)

Using our rectKernel and our gray image, we perform a Top-hat morphological operation, storing the result as tophat (Line 65).

The Top-hat operation reveals light regions against a dark background (i.e., the credit card numbers), as you can see in the resulting image below:

Figure 7: Applying a tophat operation reveals light regions (i.e., the credit card digits) against a dark background.
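
For intuition, the white top-hat is defined as the input minus its morphological opening. The opening erases bright structures smaller than the kernel (such as the thin digit strokes), so subtracting it leaves exactly those bright details behind. A sketch of the equivalence, which should print True since the opening never exceeds the input:

# tophat(img) == img - opening(img); cv2.subtract saturates, but no
# clipping occurs here because opening(img) <= img everywhere
opened = cv2.morphologyEx(gray, cv2.MORPH_OPEN, rectKernel)
manualTophat = cv2.subtract(gray, opened)
print(np.array_equal(manualTophat, tophat))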

Given our tophat image, let's compute the gradient along the x-direction:

# compute the Scharr gradient of the tophat image, then scale
# the result back into the range [0, 255]
gradX = cv2.Sobel(tophat, ddepth=cv2.CV_32F, dx=1, dy=0,
	ksize=-1)
gradX = np.absolute(gradX)
(minVal, maxVal) = (np.min(gradX), np.max(gradX))
gradX = (255 * ((gradX - minVal) / (maxVal - minVal)))
gradX = gradX.astype("uint8")

The next step in our effort to isolate the digits is to compute a Scharr gradient of the tophat image in the x-direction. We complete this computation on Lines 69 and 70, storing the result as gradX.

After computing the absolute value of each element in the gradX array, we take some steps to scale the values into the range [0-255] (as the image is currently a floating point data type). To do this we compute the minVal and maxVal of gradX (Line 72) followed by our scaling equation shown on Line 73 (i.e., min/max normalization). The last step is to convert gradX to a uint8 which has a range of [0-255] (Line 74).
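
As an aside, OpenCV can perform the same min/max scaling in a single call via cv2.normalize; an equivalent alternative to Lines 72-74 (after the absolute value on Line 71):

# equivalent min/max normalization in one OpenCV call
gradX = cv2.normalize(gradX, None, 0, 255,
	cv2.NORM_MINMAX).astype("uint8")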

The result is shown in the image below:

Figure 8: Computing the Scharr gradient magnitude representation of the image reveals vertical changes in the gradient.

Let's continue to improve our credit card digit finding algorithm:

# apply a closing operation using the rectangular kernel to help
# close gaps in between credit card number digits, then apply
# Otsu's thresholding method to binarize the image
gradX = cv2.morphologyEx(gradX, cv2.MORPH_CLOSE, rectKernel)
thresh = cv2.threshold(gradX, 0, 255,
	cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]

# apply a second closing operation to the binary image, again
# to help close gaps between credit card number regions
thresh = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, sqKernel)

To close the gaps, we do a closing operation on Line 79. Notice that we use our rectKernel again. Afterwards we perform an Otsu and binary threshold of the gradX image (Lines 80 and 81), followed by another closing operation (Line 85). The result of these steps is shown here:

Figure 9: Thresholding our gradient magnitude representation reveals candidate regions for the credit card numbers we are going to OCR.

Next let's find the contours and initialize the list of digit group locations.

# find contours in the thresholded image, then initialize the
# list of digit locations
cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
	cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
locs = []

On Lines 89-91 we find the contours and store them in a list, cnts. Then, we initialize a list to hold the digit group locations on Line 92.

Now let's loop through the contours while filtering based on the aspect ratio of each, allowing us to prune the digit group locations from other, irrelevant areas of the credit card:

# loop over the contours
for (i, c) in enumerate(cnts):
	# compute the bounding box of the contour, then use the
	# bounding box coordinates to derive the aspect ratio
	(x, y, w, h) = cv2.boundingRect(c)
	ar = w / float(h)

	# since credit cards used a fixed size fonts with 4 groups
	# of 4 digits, we can prune potential contours based on the
	# aspect ratio
	if ar > 2.5 and ar < 4.0:
		# contours can further be pruned on minimum/maximum width
		# and height
		if (w > 40 and w < 55) and (h > 10 and h < 20):
			# append the bounding box region of the digits group
			# to our locations list
			locs.append((x, y, w, h))

On Line 95 we loop through the contours the same way we did for the reference image. After calculating the bounding rectangle for each contour, c (Line 98), we calculate the aspect ratio, ar, by dividing the width by the height (Line 99).

Using the aspect ratio, we analyze the shape of each contour. If ar is between 2.5 and 4.0 (wider than it is tall), as well as w between 40 and 55 pixels and h between 10 and 20 pixels, we append the bounding rectangle parameters in a convenient tuple to locs (Lines 101-110).

Note: These values for the aspect ratio and minimum width and height were found experimentally on my set of input credit card images. You may need to change these values for your own applications.
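
If the filter rejects the digit groups on your own images, a quick way to retune it is to print each contour's geometry before hard-coding new thresholds. A small debugging sketch:

# optional tuning aid: inspect each candidate's measurements before
# adjusting the aspect ratio / width / height thresholds above
for c in cnts:
	(x, y, w, h) = cv2.boundingRect(c)
	print("w={}, h={}, ar={:.2f}".format(w, h, w / float(h)))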

The following image shows the groupings that we have found; for demonstration purposes, I had OpenCV draw a bounding box around each group:

Figure 10: Highlighting the four groups of four digits (16 overall) on a credit card.

Next, we'll sort the groupings from left to right and initialize a list for the credit card digits:

# sort the digit locations from left-to-right, then initialize the
# list of classified digits
locs = sorted(locs, key=lambda x: x[0])
output = []

On Line 114 we sort the locs according to the x-value so they will be ordered from left to right.

We initialize a list, output, which will hold the image's credit card number, on Line 115.

Now that we know where each group of four digits is, let's loop through the four sorted groupings and determine the digits therein.

This loop is rather long and is broken down into three code blocks; here is the first block:

# loop over the 4 groupings of 4 digits
for (i, (gX, gY, gW, gH)) in enumerate(locs):
	# initialize the list of group digits
	groupOutput = []

	# extract the group ROI of 4 digits from the grayscale image,
	# then apply thresholding to segment the digits from the
	# background of the credit card
	group = gray[gY - 5:gY + gH + 5, gX - 5:gX + gW + 5]
	group = cv2.threshold(group, 0, 255,
		cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]

	# detect the contours of each individual digit in the group,
	# then sort the digit contours from left to right
	digitCnts = cv2.findContours(group.copy(), cv2.RETR_EXTERNAL,
		cv2.CHAIN_APPROX_SIMPLE)
	digitCnts = imutils.grab_contours(digitCnts)
	digitCnts = contours.sort_contours(digitCnts,
		method="left-to-right")[0]

In the first block of this loop, we extract and pad the group by 5 pixels on each side (Line 125), apply thresholding (Lines 126 and 127), and find and sort contours (Lines 129-135). For the details, be sure to refer to the code.

Shown below is a single group that has been extracted:

Figure 11: An example of extracting a single group of digits from the input credit card for OCR.

Let's continue the loop with a nested loop to do the template matching and similarity score extraction:

	# loop over the digit contours
	for c in digitCnts:
		# compute the bounding box of the individual digit, extract
		# the digit, and resize it to have the same fixed size as
		# the reference OCR-A images
		(x, y, w, h) = cv2.boundingRect(c)
		roi = group[y:y + h, x:x + w]
		roi = cv2.resize(roi, (57, 88))

		# initialize a list of template matching scores
		scores = []

		# loop over the reference digit name and digit ROI
		for (digit, digitROI) in digits.items():
			# apply correlation-based template matching, take the
			# score, and update the scores list
			result = cv2.matchTemplate(roi, digitROI,
				cv2.TM_CCOEFF)
			(_, score, _, _) = cv2.minMaxLoc(result)
			scores.append(score)

		# the classification for the digit ROI will be the reference
		# digit name with the *largest* template matching score
		groupOutput.append(str(np.argmax(scores)))

Using cv2.boundingRect we obtain parameters necessary to extract a ROI containing each digit (Lines 142 and 143). In order for template matching to work with some degree of accuracy, we resize the roi to the same size as our reference OCR-A font digit images (57×88 pixels) on Line 144.

We initialize a scores list on Line 147. Think of this as our confidence score: the higher it is, the more likely it is the correct template.

Now, let's loop (third nested loop) through each reference digit and perform template matching. This is where the heavy lifting is done for this script.

OpenCV has a handy function called cv2.matchTemplate in which you supply two images: one being the template and the other being the input image. The goal of applying cv2.matchTemplate to these two images is to determine how similar they are.
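
To make the mechanics concrete, here is a tiny standalone sketch (separate from the script above) that matches a synthetic patch against a synthetic image and reads off the peak score with cv2.minMaxLoc:

import cv2
import numpy as np

# toy example: a black image with one white square, plus a template
# cropped from the region around that square
image = np.zeros((100, 100), dtype="uint8")
cv2.rectangle(image, (30, 30), (60, 60), 255, -1)
template = image[25:70, 25:70]

# TM_CCOEFF produces a score map over all template placements;
# cv2.minMaxLoc pulls out the peak score and its location
result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF)
(minVal, maxVal, minLoc, maxLoc) = cv2.minMaxLoc(result)
print(maxVal, maxLoc)  # the peak lands at (25, 25)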

In this case we supply the reference digitROI image and the roi from the credit card containing a candidate digit. Using these two images we call the template matching function and store the result (Lines 153 and 154).

Next, we extract the score from the result (Line 155) and append it to our scores list (Line 156). This completes the inner-most loop.

Using the scores (one for each digit 0-9), we take the maximum score; the maximum score should be our correctly identified digit. We find the digit with the max score on Line 160, grabbing the specific index via np.argmax. The integer name of this index represents the most-likely digit based on the comparisons to each template (again, keeping in mind that the indexes are already pre-sorted 0-9).
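
For a concrete picture of the argmax step, suppose the ten scores for a candidate ROI came back as below (made-up numbers); the index of the largest value is the predicted digit:

# hypothetical scores for templates 0-9; index 4 holds the largest
# value, so this ROI would be classified as the digit "4"
scores = [0.1, 0.3, 0.2, 0.5, 2.7, 0.4, 0.6, 0.2, 0.9, 0.3]
print(str(np.argmax(scores)))  # prints "4"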

Finally, let's draw a rectangle around each group and view the credit card number on the image in red text:

	# draw the digit classifications around the group
	cv2.rectangle(image, (gX - 5, gY - 5),
		(gX + gW + 5, gY + gH + 5), (0, 0, 255), 2)
	cv2.putText(image, "".join(groupOutput), (gX, gY - 15),
		cv2.FONT_HERSHEY_SIMPLEX, 0.65, (0, 0, 255), 2)

	# update the output digits list
	output.extend(groupOutput)

For the third and final block of this loop, we draw a 5-pixel padded rectangle around the group (Lines 163 and 164) followed by drawing the text on the screen (Lines 165 and 166).

The last step is to append the digits to the output list. The Pythonic way to do this is to use the extend method, which appends each element of the iterable object (a list in this case) to the end of the list.
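
If the distinction between extend and append is new to you, here is a quick illustration:

# extend flattens the iterable into the list; append would instead
# nest the whole group as a single element
output = ["5", "4", "1", "2"]
output.extend(["7", "5", "1", "2"])
print(output)  # ['5', '4', '1', '2', '7', '5', '1', '2']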

To see how well the script performs, let's output the results to the terminal and display our image on the screen.

# display the output credit card information to the screen
print("Credit Card Type: {}".format(FIRST_NUMBER[output[0]]))
print("Credit Card #: {}".format("".join(output)))
cv2.imshow("Image", image)
cv2.waitKey(0)

Line 172 prints the credit card type to the console, followed by printing the credit card number on the subsequent Line 173.

On the last lines, we display the image on the screen and wait for any key to be pressed before exiting the script (Lines 174 and 175).

Take a second to congratulate yourself; you made it to the end. To recap (at a high level), this script:

  1. Stores credit card types in a dictionary.
  2. Takes a reference image and extracts the digits.
  3. Stores the digit templates in a dictionary.
  4. Localizes the four credit card number groups, each holding four digits (for a total of 16 digits).
  5. Extracts the digits to be "matched".
  6. Performs template matching on each digit, comparing each individual ROI to each of the digit templates 0-9, whilst storing a score for each attempted match.
  7. Finds the highest score for each candidate digit, and builds a list called output which contains the credit card number.
  8. Outputs the credit card number and credit card type to our terminal and displays the output image to our screen.

It's now time to see the script in action and check on our results.

Credit card OCR results

Now that we have coded our credit card OCR system, let's give it a shot.

We obviously cannot use real credit card numbers for this example, so I've gathered a few example images of credit cards using Google. These credit cards are obviously fake and for demonstration purposes only.

However, you can apply the same techniques in this blog post to recognize the digits on actual, real credit cards.

To see our credit card OCR system in action, open up a terminal and execute the following command:

$ python ocr_template_match.py --reference ocr_a_reference.png \
	--image images/credit_card_05.png
Credit Card Type: MasterCard
Credit Card #: 5476767898765432

Our first result image, 100% correct:

Figure 12: Applying template matching with OpenCV and Python to OCR the digits on a credit card.

Notice how we were able to correctly label the credit card as MasterCard, simply by inspecting the first digit in the credit card number.

Let's try a second image, this time a Visa:

$ python ocr_template_match.py --reference ocr_a_reference.png \
	--image images/credit_card_01.png
Credit Card Type: Visa
Credit Card #: 4000123456789010

Figure 13: A second example of OCR'ing digits using Python and OpenCV.

Once again, we were able to correctly OCR the credit card using template matching.

How about another image, this time from PSECU, a credit union in Pennsylvania:

$ python ocr_template_match.py --reference ocr_a_reference.png \
	--image images/credit_card_02.png
Credit Card Type: Visa
Credit Card #: 4020340002345678

Figure 14: Our system is correctly able to find the digits on the credit card, then apply template matching to recognize them.

Our OCR template matching algorithm correctly identifies each of the 16 digits. Given that each of the 16 digits was correctly OCR'd, we can also label the credit card as a Visa.

Here's another MasterCard example image, this one from Bed, Bath, & Beyond:

$ python ocr_template_match.py --reference ocr_a_reference.png \
	--image images/credit_card_03.png
Credit Card Type: MasterCard
Credit Card #: 5412751234567890

Figure 15: Regardless of credit card design and type, we can still detect the digits and recognize them using template matching.

No problems for our template matching OCR algorithm here!

As our final example, let's use another Visa:

$ python ocr_template_match.py --reference ocr_a_reference.png \
	--image images/credit_card_04.png
Credit Card Type: Visa
Credit Card #: 4000123456789010
Figure 16: A final example of applying OCR with Python and OpenCV.

In each of the examples in this blog post, our template matching OCR script using OpenCV and Python correctly identified each of the 16 digits 100% of the time.

Furthermore, template matching is also a very fast method when comparing digits.

Unfortunately, we were not able to apply our OCR method to real credit card images, so that certainly raises the question of whether this approach would be reliable on actual, real-world images. Given changes in lighting conditions, viewpoint angle, and other general noise, it's likely that we might need to take a more machine learning oriented approach.

Regardless, at least for these example images, we were able to successfully apply template matching as a form of OCR.

What's next? I recommend PyImageSearch University.

Course information:
35+ total classes • 39h 44m video • Last updated: February 2022
★★★★★ 4.84 (128 Ratings) • 3,000+ Students Enrolled

I strongly believe that if you had the right teacher you could master computer vision and deep learning.

Do you think learning computer vision and deep learning has to be time-consuming, overwhelming, and complicated? Or has to involve complex mathematics and equations? Or requires a degree in computer science?

That's not the case.

All you need to master computer vision and deep learning is for someone to explain things to you in simple, intuitive terms. And that's exactly what I do. My mission is to change education and how complex Artificial Intelligence topics are taught.

If you're serious about learning computer vision, your next stop should be PyImageSearch University, the most comprehensive computer vision, deep learning, and OpenCV course online today. Here you'll learn how to successfully and confidently apply computer vision to your work, research, and projects. Join me in computer vision mastery.

Inside PyImageSearch University you'll find:

  • ✓ 35+ courses on essential computer vision, deep learning, and OpenCV topics
  • ✓ 35+ Certificates of Completion
  • ✓ 39h 44m on-demand video
  • ✓ Brand new courses released every month, ensuring you can keep up with state-of-the-art techniques
  • ✓ Pre-configured Jupyter Notebooks in Google Colab
  • ✓ Run all code examples in your web browser: works on Windows, macOS, and Linux (no dev environment configuration required!)
  • ✓ Access to centralized code repos for all 500+ tutorials on PyImageSearch
  • ✓ Easy one-click downloads for code, datasets, pre-trained models, etc.
  • ✓ Access on mobile, laptop, desktop, etc.

Click here to join PyImageSearch University

Summary

In this tutorial we learned how to perform Optical Character Recognition (OCR) using template matching via OpenCV and Python.

Specifically, we applied our template matching OCR approach to recognize the type of a credit card along with the 16 credit card digits.

To accomplish this, we broke our image processing pipeline into four steps:

  1. Detecting the four groups of four numbers on the credit card via various image processing techniques, including morphological operations, thresholding, and contour extraction.
  2. Extracting each of the individual digits from the four groupings, leading to 16 digits that need to be classified.
  3. Applying template matching to each digit by comparing it to the OCR-A font to obtain our digit classification.
  4. Examining the first digit of the credit card number to determine the issuing company.

After evaluating our credit card OCR system, we found it to be 100% accurate provided that the issuing credit card company used the OCR-A font for the digits.

To extend this application, you would want to gather real images of credit cards in the wild and potentially train a machine learning model (either via standard feature extraction or training a Convolutional Neural Network) to further improve the accuracy of this system.

I hope you enjoyed this blog post on OCR via template matching using OpenCV and Python.

To be notified when future tutorials are published here on PyImageSearch, be sure to enter your email address in the form below!

Download the Source Code and FREE 17-page Resource Guide

Enter your email address below to get a .zip of the code and a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL!


Source: https://pyimagesearch.com/2017/07/17/credit-card-ocr-with-opencv-and-python/
