THEANO_FLAGS error using GCC with Keras Theano on AWS Lambda

I'm trying to get a Keras (Theano backend) ML model to run on AWS Lambda. Inference was taking far too long (about 60 seconds per image!), and a possible solution I found online is to install GCC binaries, so that Theano can compile its C code instead of falling back to its slow Python implementations.

The problem is that I still get an error when I try to get the GCC binaries working on Lambda. The error seems related to the THEANO_FLAGS environment variable I set, and perhaps to the GCC installation location.

I'm using Python 3.6. Apart from the lib and bin directories for GCC, which are downloaded at runtime (see the code below), I'm loading the libraries for Keras, NumPy, etc. through Lambda layers.

My environment variables:

  • CPATH=$CPATH:/tmp/lib:/tmp/lib/python3.6
  • GCC_EXEC_PREFIX=/tmp/gcc/lib/gcc/
  • KERAS_BACKEND=theano
  • LIBRARY_PATH=$LIBRARY_PATH:/tmp/lib
  • THEANO_FLAGS=base_compiledir=/tmp/.theano,cxx=/tmp/gcc/bin/g++

The last one, THEANO_FLAGS, is what gives me this error when I run the function on Lambda:

module initialization error: Compilation failed (return status=1):
/tmp/.theano/compiledir_Linux-4.14-amzn2.x86_64-x86_64-with-glibc2.2.5-x86_64-3.6.9-64/lazylinker_ext/mod.cpp:1:
fatal error: Python.h: No such file or directory. compilation terminated..
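
These variables are set in the Lambda configuration; as far as I know, exporting them from inside the function before Theano is imported should be equivalent. A minimal sketch using the same paths as above:

import os

# Same settings as the Lambda configuration, exported in-process before
# keras/theano are imported (equivalent setup, not a fix for the error).
os.environ['KERAS_BACKEND'] = 'theano'
os.environ['GCC_EXEC_PREFIX'] = '/tmp/gcc/lib/gcc/'
os.environ['CPATH'] = os.environ.get('CPATH', '') + ':/tmp/lib:/tmp/lib/python3.6'
os.environ['LIBRARY_PATH'] = os.environ.get('LIBRARY_PATH', '') + ':/tmp/lib'
os.environ['THEANO_FLAGS'] = 'base_compiledir=/tmp/.theano,cxx=/tmp/gcc/bin/g++'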

The code:

import os
import json
import boto3

import shutil
import stat
import zipfile
from six.moves import urllib


# Making sure to remove earlier lock. Can happen if you are playing around and changing functions rapidly.
if os.path.exists("/tmp/.theano"):
   shutil.rmtree("/tmp/.theano")

S3_KEY = os.environ.get('S3_KEY', '')
S3_SECRET = os.environ.get('S3_SECRET', '')
S3_BUCKET = os.environ.get('S3_BUCKET', '')

key = os.environ.get('key', '')

s3 = boto3.client("s3", aws_access_key_id=S3_KEY, aws_secret_access_key=S3_SECRET, region_name='us-east-2')

GCC_URL = "my_gcc_url"
LIB_URL = "my_lib_for_gcc_url"

def download(url, local_fpath):
    urllib.request.urlretrieve(url, local_fpath)

def make_gcc_executable():
    for fpath in os.listdir("/tmp/gcc/bin"):
        fpath = os.path.join("/tmp/gcc/bin", fpath)
        st = os.stat(fpath)
        os.chmod(fpath, st.st_mode | stat.S_IXOTH | stat.S_IXGRP | stat.S_IXUSR)

    for fpath in os.listdir("/tmp/gcc/libexec/gcc/x86_64-linux-gnu/4.6.4"):
        fpath = os.path.join("/tmp/gcc/libexec/gcc/x86_64-linux-gnu/4.6.4", fpath)
        st = os.stat(fpath)
        os.chmod(fpath, st.st_mode | stat.S_IXOTH | stat.S_IXGRP | stat.S_IXUSR)


# Download GCC and uncompress it.
download(GCC_URL, "/tmp/gcc.zip")
zipfile.ZipFile("/tmp/gcc.zip").extractall("/tmp/gcc")

make_gcc_executable()

# Download Include folder and uncompress it
download(LIB_URL, "/tmp/lib.zip")
zipfile.ZipFile("/tmp/lib.zip").extractall("/tmp/lib")
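
# Note: keras/theano must be imported only after GCC and the downloaded
# headers/libs are in place, because importing Theano compiles its
# lazylinker_ext extension with the configured C++ compiler.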

import numpy as np
from keras.models import load_model
from keras.preprocessing import image


def lambda_handler(event, context):

    # DOWNLOAD CLASSIFIER AND IMAGE FROM S3
    s3.download_file(S3_BUCKET, key, "/tmp/classifier.hdf5")
    s3.download_file(S3_BUCKET, '15.jpg', '/tmp/image2classify.jpg')

    # LOAD CLASSIFIER
    classifier = load_model("/tmp/classifier.hdf5")

    # PROCESS THE IMAGE    
    test_image = image.load_img("/tmp/image2classify.jpg",
                                target_size = (192, 192),
                                color_mode = 'grayscale')
    test_image = image.img_to_array(test_image)
    test_image /= 255
    test_image = np.expand_dims(test_image, axis = 0)

    # MAKE A PREDICTION
    prediction = classifier.predict(test_image)

    return {
        'statusCode': 200,
        'body': json.dumps('Hello World!!!')
    }
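
If it helps to reproduce this outside Lambda, the handler can be invoked directly; a minimal sketch, assuming the S3_* and key environment variables are also exported locally (the event contents are not used):

# Hypothetical local smoke test, not part of the Lambda deployment.
if __name__ == "__main__":
    print(lambda_handler({}, None))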
